2025-07-12 12:58:36.963537 | Job console starting
2025-07-12 12:58:36.981585 | Updating git repos
2025-07-12 12:58:37.040928 | Cloning repos into workspace
2025-07-12 12:58:37.243215 | Restoring repo states
2025-07-12 12:58:37.258435 | Merging changes
2025-07-12 12:58:37.791414 | Checking out repos
2025-07-12 12:58:38.017241 | Preparing playbooks
2025-07-12 12:58:38.711472 | Running Ansible setup
2025-07-12 12:58:43.021412 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-07-12 12:58:43.786744 |
2025-07-12 12:58:43.786967 | PLAY [Base pre]
2025-07-12 12:58:43.804104 |
2025-07-12 12:58:43.804246 | TASK [Setup log path fact]
2025-07-12 12:58:43.834067 | orchestrator | ok
2025-07-12 12:58:43.851834 |
2025-07-12 12:58:43.852015 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-07-12 12:58:43.893703 | orchestrator | ok
2025-07-12 12:58:43.907136 |
2025-07-12 12:58:43.907269 | TASK [emit-job-header : Print job information]
2025-07-12 12:58:43.947035 | # Job Information
2025-07-12 12:58:43.947212 | Ansible Version: 2.16.14
2025-07-12 12:58:43.947247 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-07-12 12:58:43.947281 | Pipeline: label
2025-07-12 12:58:43.947304 | Executor: 521e9411259a
2025-07-12 12:58:43.947325 | Triggered by: https://github.com/osism/testbed/pull/2740
2025-07-12 12:58:43.947346 | Event ID: e7f40800-5f1f-11f0-836d-fb62bbb5ef7a
2025-07-12 12:58:43.954107 |
2025-07-12 12:58:43.954228 | LOOP [emit-job-header : Print node information]
2025-07-12 12:58:44.082049 | orchestrator | ok:
2025-07-12 12:58:44.082360 | orchestrator | # Node Information
2025-07-12 12:58:44.082453 | orchestrator | Inventory Hostname: orchestrator
2025-07-12 12:58:44.082521 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-07-12 12:58:44.082569 | orchestrator | Username: zuul-testbed04
2025-07-12 12:58:44.082612 | orchestrator | Distro: Debian 12.11
2025-07-12 12:58:44.082744 | orchestrator | Provider: static-testbed
2025-07-12 12:58:44.082790 | orchestrator | Region:
2025-07-12 12:58:44.082825 | orchestrator | Label: testbed-orchestrator
2025-07-12 12:58:44.082887 | orchestrator | Product Name: OpenStack Nova
2025-07-12 12:58:44.082920 | orchestrator | Interface IP: 81.163.193.140
2025-07-12 12:58:44.104766 |
2025-07-12 12:58:44.104917 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-07-12 12:58:44.577131 | orchestrator -> localhost | changed
2025-07-12 12:58:44.592394 |
2025-07-12 12:58:44.592570 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-07-12 12:58:45.660336 | orchestrator -> localhost | changed
2025-07-12 12:58:45.675190 |
2025-07-12 12:58:45.675326 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-07-12 12:58:45.953477 | orchestrator -> localhost | ok
2025-07-12 12:58:45.968517 |
2025-07-12 12:58:45.968678 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-07-12 12:58:46.001260 | orchestrator | ok
2025-07-12 12:58:46.019266 | orchestrator | included: /var/lib/zuul/builds/40f5487bf9f54cd38fd17208779020e4/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-07-12 12:58:46.027086 |
2025-07-12 12:58:46.027184 | TASK [add-build-sshkey : Create Temp SSH key]
2025-07-12 12:58:47.154651 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-07-12 12:58:47.154903 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/40f5487bf9f54cd38fd17208779020e4/work/40f5487bf9f54cd38fd17208779020e4_id_rsa
2025-07-12 12:58:47.154962 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/40f5487bf9f54cd38fd17208779020e4/work/40f5487bf9f54cd38fd17208779020e4_id_rsa.pub
2025-07-12 12:58:47.154992 | orchestrator -> localhost | The key fingerprint is:
2025-07-12 12:58:47.155017 | orchestrator -> localhost | SHA256:mZbyF5kW9NNwODTEB//5h3tggztjC6nDMqfZVi7xRZ4 zuul-build-sshkey
2025-07-12 12:58:47.155040 | orchestrator -> localhost | The key's randomart image is:
2025-07-12 12:58:47.155071 | orchestrator -> localhost | +---[RSA 3072]----+
2025-07-12 12:58:47.155094 | orchestrator -> localhost | | .+*o. |
2025-07-12 12:58:47.155115 | orchestrator -> localhost | | . .+*. |
2025-07-12 12:58:47.155135 | orchestrator -> localhost | | . ooo |
2025-07-12 12:58:47.155154 | orchestrator -> localhost | | + +.. ..|
2025-07-12 12:58:47.155174 | orchestrator -> localhost | | . S =o o ..|
2025-07-12 12:58:47.155199 | orchestrator -> localhost | | +...oE +..|
2025-07-12 12:58:47.155221 | orchestrator -> localhost | | ..=+. o.oo|
2025-07-12 12:58:47.155241 | orchestrator -> localhost | | oo*oo.= .o|
2025-07-12 12:58:47.155262 | orchestrator -> localhost | | o*oo ..+.. |
2025-07-12 12:58:47.155283 | orchestrator -> localhost | +----[SHA256]-----+
2025-07-12 12:58:47.155335 | orchestrator -> localhost | ok: Runtime: 0:00:00.698166
2025-07-12 12:58:47.162281 |
2025-07-12 12:58:47.162372 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-07-12 12:58:47.191542 | orchestrator | ok
2025-07-12 12:58:47.202520 | orchestrator | included: /var/lib/zuul/builds/40f5487bf9f54cd38fd17208779020e4/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-07-12 12:58:47.212976 |
2025-07-12 12:58:47.213074 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-07-12 12:58:47.236532 | orchestrator | skipping: Conditional result was False
2025-07-12 12:58:47.252921 |
2025-07-12 12:58:47.253075 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-07-12 12:58:47.816336 | orchestrator | changed
2025-07-12 12:58:47.825172 |
2025-07-12 12:58:47.825277 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-07-12 12:58:48.106794 | orchestrator | ok
2025-07-12 12:58:48.117809 |
2025-07-12 12:58:48.117955 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-07-12 12:58:48.678117 | orchestrator | ok
2025-07-12 12:58:48.690686 |
2025-07-12 12:58:48.690823 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-07-12 12:58:49.113669 | orchestrator | ok
2025-07-12 12:58:49.127492 |
2025-07-12 12:58:49.127616 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-07-12 12:58:49.140983 | orchestrator | skipping: Conditional result was False
2025-07-12 12:58:49.151462 |
2025-07-12 12:58:49.151586 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-07-12 12:58:49.530277 | orchestrator -> localhost | changed
2025-07-12 12:58:49.560047 |
2025-07-12 12:58:49.560216 | TASK [add-build-sshkey : Add back temp key]
2025-07-12 12:58:49.857484 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/40f5487bf9f54cd38fd17208779020e4/work/40f5487bf9f54cd38fd17208779020e4_id_rsa (zuul-build-sshkey)
2025-07-12 12:58:49.858072 | orchestrator -> localhost | ok: Runtime: 0:00:00.011826
2025-07-12 12:58:49.872551 |
2025-07-12 12:58:49.872691 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-07-12 12:58:50.265621 | orchestrator | ok
2025-07-12 12:58:50.274591 |
2025-07-12 12:58:50.274738 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-07-12 12:58:50.299065 | orchestrator | skipping: Conditional result was False
2025-07-12 12:58:50.360087 |
2025-07-12 12:58:50.360222 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-07-12 12:58:50.767897 | orchestrator | ok
2025-07-12 12:58:50.779766 |
2025-07-12 12:58:50.779889 | TASK [validate-host : Define zuul_info_dir fact]
2025-07-12 12:58:50.829407 | orchestrator | ok
2025-07-12 12:58:50.841411 |
2025-07-12 12:58:50.841639 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-07-12 12:58:51.143316 | orchestrator -> localhost | ok
2025-07-12 12:58:51.150822 |
2025-07-12 12:58:51.151079 | TASK [validate-host : Collect information about the host]
2025-07-12 12:58:52.406953 | orchestrator | ok
2025-07-12 12:58:52.421564 |
2025-07-12 12:58:52.421701 | TASK [validate-host : Sanitize hostname]
2025-07-12 12:58:52.472021 | orchestrator | ok
2025-07-12 12:58:52.478393 |
2025-07-12 12:58:52.478513 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-07-12 12:58:53.018768 | orchestrator -> localhost | changed
2025-07-12 12:58:53.025449 |
2025-07-12 12:58:53.025574 | TASK [validate-host : Collect information about zuul worker]
2025-07-12 12:58:53.443324 | orchestrator | ok
2025-07-12 12:58:53.450019 |
2025-07-12 12:58:53.450144 | TASK [validate-host : Write out all zuul information for each host]
2025-07-12 12:58:53.989795 | orchestrator -> localhost | changed
2025-07-12 12:58:54.000830 |
2025-07-12 12:58:54.000988 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-07-12 12:58:54.311800 | orchestrator | ok
2025-07-12 12:58:54.322296 |
2025-07-12 12:58:54.322442 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-07-12 12:59:35.194231 | orchestrator | changed:
2025-07-12 12:59:35.194462 | orchestrator | .d..t...... src/
2025-07-12 12:59:35.194498 | orchestrator | .d..t...... src/github.com/
2025-07-12 12:59:35.194523 | orchestrator | .d..t...... src/github.com/osism/
2025-07-12 12:59:35.194546 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-07-12 12:59:35.194567 | orchestrator | RedHat.yml
2025-07-12 12:59:35.205620 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-07-12 12:59:35.205637 | orchestrator | RedHat.yml
2025-07-12 12:59:35.205690 | orchestrator | = 1.53.0"...
2025-07-12 12:59:47.982116 | orchestrator | 12:59:47.981 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-07-12 12:59:48.571894 | orchestrator | 12:59:48.571 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-07-12 12:59:49.134708 | orchestrator | 12:59:49.134 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-07-12 12:59:49.968495 | orchestrator | 12:59:49.968 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.0...
2025-07-12 12:59:51.264368 | orchestrator | 12:59:51.264 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.0 (signed, key ID 4F80527A391BEFD2)
2025-07-12 12:59:52.087406 | orchestrator | 12:59:52.087 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-07-12 12:59:52.947277 | orchestrator | 12:59:52.946 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-07-12 12:59:52.947367 | orchestrator | 12:59:52.947 STDOUT terraform: Providers are signed by their developers.
2025-07-12 12:59:52.947375 | orchestrator | 12:59:52.947 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-07-12 12:59:52.947380 | orchestrator | 12:59:52.947 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-07-12 12:59:52.947394 | orchestrator | 12:59:52.947 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-07-12 12:59:52.947407 | orchestrator | 12:59:52.947 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-07-12 12:59:52.947415 | orchestrator | 12:59:52.947 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-07-12 12:59:52.947420 | orchestrator | 12:59:52.947 STDOUT terraform: you run "tofu init" in the future.
2025-07-12 12:59:52.947426 | orchestrator | 12:59:52.947 STDOUT terraform: OpenTofu has been successfully initialized!
2025-07-12 12:59:52.947509 | orchestrator | 12:59:52.947 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-07-12 12:59:52.947600 | orchestrator | 12:59:52.947 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-07-12 12:59:52.947605 | orchestrator | 12:59:52.947 STDOUT terraform: should now work.
2025-07-12 12:59:52.947612 | orchestrator | 12:59:52.947 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-07-12 12:59:52.947683 | orchestrator | 12:59:52.947 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-07-12 12:59:52.947727 | orchestrator | 12:59:52.947 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-07-12 12:59:53.074940 | orchestrator | 12:59:53.074 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
2025-07-12 12:59:53.075037 | orchestrator | 12:59:53.074 WARN The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-07-12 12:59:53.492497 | orchestrator | 12:59:53.492 STDOUT terraform: Created and switched to workspace "ci"!
2025-07-12 12:59:53.492572 | orchestrator | 12:59:53.492 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-07-12 12:59:53.492809 | orchestrator | 12:59:53.492 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-07-12 12:59:53.492908 | orchestrator | 12:59:53.492 STDOUT terraform: for this configuration.
2025-07-12 12:59:53.673505 | orchestrator | 12:59:53.673 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
2025-07-12 12:59:53.673663 | orchestrator | 12:59:53.673 WARN The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
2025-07-12 12:59:53.788073 | orchestrator | 12:59:53.787 STDOUT terraform: ci.auto.tfvars
2025-07-12 12:59:53.791746 | orchestrator | 12:59:53.791 STDOUT terraform: default_custom.tf
2025-07-12 12:59:53.928901 | orchestrator | 12:59:53.928 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
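The Terragrunt warnings above state their own remedies: replace the deprecated `TERRAGRUNT_TFPATH` variable with `TG_TF_PATH`, and invoke raw OpenTofu subcommands through `terragrunt run --`. A minimal sketch of that migration, using the path printed in the warnings (the terragrunt invocations are kept as comments since they need a configured working directory):

```shell
# Replaces the deprecated TERRAGRUNT_TFPATH environment variable.
export TG_TF_PATH=/home/zuul-testbed04/terraform

# Deprecated bare subcommands and their suggested replacements:
#   terragrunt workspace select ci  ->  terragrunt run -- workspace select ci
#   terragrunt fmt                  ->  terragrunt run -- fmt
echo "$TG_TF_PATH"
```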
2025-07-12 12:59:54.993719 | orchestrator | 12:59:54.993 STDOUT terraform: data.openstack_networking_network_v2.public: Reading... 2025-07-12 12:59:55.496202 | orchestrator | 12:59:55.495 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a] 2025-07-12 12:59:55.705745 | orchestrator | 12:59:55.705 STDOUT terraform: OpenTofu used the selected providers to generate the following execution 2025-07-12 12:59:55.705817 | orchestrator | 12:59:55.705 STDOUT terraform: plan. Resource actions are indicated with the following symbols: 2025-07-12 12:59:55.705911 | orchestrator | 12:59:55.705 STDOUT terraform:  + create 2025-07-12 12:59:55.705961 | orchestrator | 12:59:55.705 STDOUT terraform:  <= read (data resources) 2025-07-12 12:59:55.706035 | orchestrator | 12:59:55.705 STDOUT terraform: OpenTofu will perform the following actions: 2025-07-12 12:59:55.706709 | orchestrator | 12:59:55.706 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-07-12 12:59:55.706721 | orchestrator | 12:59:55.706 STDOUT terraform:  # (config refers to values not yet known) 2025-07-12 12:59:55.706727 | orchestrator | 12:59:55.706 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-07-12 12:59:55.706731 | orchestrator | 12:59:55.706 STDOUT terraform:  + checksum = (known after apply) 2025-07-12 12:59:55.706735 | orchestrator | 12:59:55.706 STDOUT terraform:  + created_at = (known after apply) 2025-07-12 12:59:55.706739 | orchestrator | 12:59:55.706 STDOUT terraform:  + file = (known after apply) 2025-07-12 12:59:55.706743 | orchestrator | 12:59:55.706 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:55.706747 | orchestrator | 12:59:55.706 STDOUT terraform:  + metadata = (known after apply) 2025-07-12 12:59:55.706765 | orchestrator | 12:59:55.706 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-07-12 12:59:55.706769 | orchestrator | 12:59:55.706 
STDOUT terraform:  + min_ram_mb = (known after apply) 2025-07-12 12:59:55.706773 | orchestrator | 12:59:55.706 STDOUT terraform:  + most_recent = true 2025-07-12 12:59:55.706777 | orchestrator | 12:59:55.706 STDOUT terraform:  + name = (known after apply) 2025-07-12 12:59:55.706780 | orchestrator | 12:59:55.706 STDOUT terraform:  + protected = (known after apply) 2025-07-12 12:59:55.706784 | orchestrator | 12:59:55.706 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:55.706788 | orchestrator | 12:59:55.706 STDOUT terraform:  + schema = (known after apply) 2025-07-12 12:59:55.706792 | orchestrator | 12:59:55.706 STDOUT terraform:  + size_bytes = (known after apply) 2025-07-12 12:59:55.706796 | orchestrator | 12:59:55.706 STDOUT terraform:  + tags = (known after apply) 2025-07-12 12:59:55.706799 | orchestrator | 12:59:55.706 STDOUT terraform:  + updated_at = (known after apply) 2025-07-12 12:59:55.706803 | orchestrator | 12:59:55.706 STDOUT terraform:  } 2025-07-12 12:59:55.707434 | orchestrator | 12:59:55.706 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 2025-07-12 12:59:55.707445 | orchestrator | 12:59:55.706 STDOUT terraform:  # (config refers to values not yet known) 2025-07-12 12:59:55.707449 | orchestrator | 12:59:55.707 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-07-12 12:59:55.707453 | orchestrator | 12:59:55.707 STDOUT terraform:  + checksum = (known after apply) 2025-07-12 12:59:55.707457 | orchestrator | 12:59:55.707 STDOUT terraform:  + created_at = (known after apply) 2025-07-12 12:59:55.707461 | orchestrator | 12:59:55.707 STDOUT terraform:  + file = (known after apply) 2025-07-12 12:59:55.707465 | orchestrator | 12:59:55.707 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:55.707468 | orchestrator | 12:59:55.707 STDOUT terraform:  + metadata = (known after apply) 2025-07-12 12:59:55.707472 | orchestrator | 12:59:55.707 STDOUT terraform:  + 
min_disk_gb = (known after apply) 2025-07-12 12:59:55.707476 | orchestrator | 12:59:55.707 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-07-12 12:59:55.707485 | orchestrator | 12:59:55.707 STDOUT terraform:  + most_recent = true 2025-07-12 12:59:55.707489 | orchestrator | 12:59:55.707 STDOUT terraform:  + name = (known after apply) 2025-07-12 12:59:55.707493 | orchestrator | 12:59:55.707 STDOUT terraform:  + protected = (known after apply) 2025-07-12 12:59:55.707497 | orchestrator | 12:59:55.707 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:55.707500 | orchestrator | 12:59:55.707 STDOUT terraform:  + schema = (known after apply) 2025-07-12 12:59:55.707504 | orchestrator | 12:59:55.707 STDOUT terraform:  + size_bytes = (known after apply) 2025-07-12 12:59:55.707508 | orchestrator | 12:59:55.707 STDOUT terraform:  + tags = (known after apply) 2025-07-12 12:59:55.707511 | orchestrator | 12:59:55.707 STDOUT terraform:  + updated_at = (known after apply) 2025-07-12 12:59:55.707515 | orchestrator | 12:59:55.707 STDOUT terraform:  } 2025-07-12 12:59:55.708040 | orchestrator | 12:59:55.707 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-07-12 12:59:55.708055 | orchestrator | 12:59:55.707 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-07-12 12:59:55.708060 | orchestrator | 12:59:55.707 STDOUT terraform:  + content = (known after apply) 2025-07-12 12:59:55.708064 | orchestrator | 12:59:55.707 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-07-12 12:59:55.708067 | orchestrator | 12:59:55.707 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-07-12 12:59:55.708071 | orchestrator | 12:59:55.707 STDOUT terraform:  + content_md5 = (known after apply) 2025-07-12 12:59:55.708075 | orchestrator | 12:59:55.707 STDOUT terraform:  + content_sha1 = (known after apply) 2025-07-12 12:59:55.708078 | orchestrator | 12:59:55.707 STDOUT terraform:  + content_sha256 = (known after 
apply) 2025-07-12 12:59:55.708083 | orchestrator | 12:59:55.707 STDOUT terraform:  + content_sha512 = (known after apply) 2025-07-12 12:59:55.708086 | orchestrator | 12:59:55.707 STDOUT terraform:  + directory_permission = "0777" 2025-07-12 12:59:55.708090 | orchestrator | 12:59:55.707 STDOUT terraform:  + file_permission = "0644" 2025-07-12 12:59:55.708094 | orchestrator | 12:59:55.707 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-07-12 12:59:55.708098 | orchestrator | 12:59:55.707 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:55.708101 | orchestrator | 12:59:55.708 STDOUT terraform:  } 2025-07-12 12:59:55.708583 | orchestrator | 12:59:55.708 STDOUT terraform:  # local_file.id_rsa_pub will be created 2025-07-12 12:59:55.708590 | orchestrator | 12:59:55.708 STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-07-12 12:59:55.708593 | orchestrator | 12:59:55.708 STDOUT terraform:  + content = (known after apply) 2025-07-12 12:59:55.708597 | orchestrator | 12:59:55.708 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-07-12 12:59:55.708601 | orchestrator | 12:59:55.708 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-07-12 12:59:55.708605 | orchestrator | 12:59:55.708 STDOUT terraform:  + content_md5 = (known after apply) 2025-07-12 12:59:55.708608 | orchestrator | 12:59:55.708 STDOUT terraform:  + content_sha1 = (known after apply) 2025-07-12 12:59:55.708612 | orchestrator | 12:59:55.708 STDOUT terraform:  + content_sha256 = (known after apply) 2025-07-12 12:59:55.708616 | orchestrator | 12:59:55.708 STDOUT terraform:  + content_sha512 = (known after apply) 2025-07-12 12:59:55.708619 | orchestrator | 12:59:55.708 STDOUT terraform:  + directory_permission = "0777" 2025-07-12 12:59:55.708623 | orchestrator | 12:59:55.708 STDOUT terraform:  + file_permission = "0644" 2025-07-12 12:59:55.708627 | orchestrator | 12:59:55.708 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-07-12 
12:59:55.708630 | orchestrator | 12:59:55.708 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:55.708664 | orchestrator | 12:59:55.708 STDOUT terraform:  } 2025-07-12 12:59:55.709160 | orchestrator | 12:59:55.708 STDOUT terraform:  # local_file.inventory will be created 2025-07-12 12:59:55.709168 | orchestrator | 12:59:55.708 STDOUT terraform:  + resource "local_file" "inventory" { 2025-07-12 12:59:55.709172 | orchestrator | 12:59:55.708 STDOUT terraform:  + content = (known after apply) 2025-07-12 12:59:55.709182 | orchestrator | 12:59:55.708 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-07-12 12:59:55.709186 | orchestrator | 12:59:55.708 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-07-12 12:59:55.709189 | orchestrator | 12:59:55.708 STDOUT terraform:  + content_md5 = (known after apply) 2025-07-12 12:59:55.709193 | orchestrator | 12:59:55.708 STDOUT terraform:  + content_sha1 = (known after apply) 2025-07-12 12:59:55.709197 | orchestrator | 12:59:55.708 STDOUT terraform:  + content_sha256 = (known after apply) 2025-07-12 12:59:55.709201 | orchestrator | 12:59:55.709 STDOUT terraform:  + content_sha512 = (known after apply) 2025-07-12 12:59:55.709204 | orchestrator | 12:59:55.709 STDOUT terraform:  + directory_permission = "0777" 2025-07-12 12:59:55.709208 | orchestrator | 12:59:55.709 STDOUT terraform:  + file_permission = "0644" 2025-07-12 12:59:55.709212 | orchestrator | 12:59:55.709 STDOUT terraform:  + filename = "inventory.ci" 2025-07-12 12:59:55.709215 | orchestrator | 12:59:55.709 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:55.709219 | orchestrator | 12:59:55.709 STDOUT terraform:  } 2025-07-12 12:59:55.709745 | orchestrator | 12:59:55.709 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-07-12 12:59:55.709754 | orchestrator | 12:59:55.709 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-07-12 12:59:55.709759 | orchestrator | 12:59:55.709 
STDOUT terraform:  + content = (sensitive value) 2025-07-12 12:59:55.709763 | orchestrator | 12:59:55.709 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-07-12 12:59:55.709766 | orchestrator | 12:59:55.709 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-07-12 12:59:55.709770 | orchestrator | 12:59:55.709 STDOUT terraform:  + content_md5 = (known after apply) 2025-07-12 12:59:55.709774 | orchestrator | 12:59:55.709 STDOUT terraform:  + content_sha1 = (known after apply) 2025-07-12 12:59:55.709778 | orchestrator | 12:59:55.709 STDOUT terraform:  + content_sha256 = (known after apply) 2025-07-12 12:59:55.709781 | orchestrator | 12:59:55.709 STDOUT terraform:  + content_sha512 = (known after apply) 2025-07-12 12:59:55.709785 | orchestrator | 12:59:55.709 STDOUT terraform:  + directory_permission = "0700" 2025-07-12 12:59:55.709789 | orchestrator | 12:59:55.709 STDOUT terraform:  + file_permission = "0600" 2025-07-12 12:59:55.709792 | orchestrator | 12:59:55.709 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-07-12 12:59:55.709796 | orchestrator | 12:59:55.709 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:55.709800 | orchestrator | 12:59:55.709 STDOUT terraform:  } 2025-07-12 12:59:55.709945 | orchestrator | 12:59:55.709 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-07-12 12:59:55.709952 | orchestrator | 12:59:55.709 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-07-12 12:59:55.709955 | orchestrator | 12:59:55.709 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:55.709959 | orchestrator | 12:59:55.709 STDOUT terraform:  } 2025-07-12 12:59:55.710464 | orchestrator | 12:59:55.710 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-07-12 12:59:55.710482 | orchestrator | 12:59:55.710 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-07-12 12:59:55.710486 | 
orchestrator | 12:59:55.710 STDOUT terraform:  + attachment = (known after apply) 2025-07-12 12:59:55.710490 | orchestrator | 12:59:55.710 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 12:59:55.710494 | orchestrator | 12:59:55.710 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:55.710497 | orchestrator | 12:59:55.710 STDOUT terraform:  + image_id = (known after apply) 2025-07-12 12:59:55.710501 | orchestrator | 12:59:55.710 STDOUT terraform:  + metadata = (known after apply) 2025-07-12 12:59:55.710505 | orchestrator | 12:59:55.710 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-07-12 12:59:55.710508 | orchestrator | 12:59:55.710 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:55.710512 | orchestrator | 12:59:55.710 STDOUT terraform:  + size = 80 2025-07-12 12:59:55.710516 | orchestrator | 12:59:55.710 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-12 12:59:55.710520 | orchestrator | 12:59:55.710 STDOUT terraform:  + volume_type = "ssd" 2025-07-12 12:59:55.710523 | orchestrator | 12:59:55.710 STDOUT terraform:  } 2025-07-12 12:59:55.711033 | orchestrator | 12:59:55.710 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-07-12 12:59:55.711041 | orchestrator | 12:59:55.710 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-07-12 12:59:55.711045 | orchestrator | 12:59:55.710 STDOUT terraform:  + attachment = (known after apply) 2025-07-12 12:59:55.711049 | orchestrator | 12:59:55.710 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 12:59:55.711053 | orchestrator | 12:59:55.710 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:55.711057 | orchestrator | 12:59:55.710 STDOUT terraform:  + image_id = (known after apply) 2025-07-12 12:59:55.711061 | orchestrator | 12:59:55.710 STDOUT terraform:  + metadata = (known after apply) 2025-07-12 12:59:55.711065 | orchestrator | 12:59:55.710 STDOUT 
terraform:  + name = "testbed-volume-0-node-base" 2025-07-12 12:59:55.711069 | orchestrator | 12:59:55.710 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:55.711072 | orchestrator | 12:59:55.710 STDOUT terraform:  + size = 80 2025-07-12 12:59:55.711076 | orchestrator | 12:59:55.710 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-12 12:59:55.711080 | orchestrator | 12:59:55.710 STDOUT terraform:  + volume_type = "ssd" 2025-07-12 12:59:55.711084 | orchestrator | 12:59:55.711 STDOUT terraform:  } 2025-07-12 12:59:55.711231 | orchestrator | 12:59:55.711 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-07-12 12:59:55.711288 | orchestrator | 12:59:55.711 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-07-12 12:59:55.711330 | orchestrator | 12:59:55.711 STDOUT terraform:  + attachment = (known after apply) 2025-07-12 12:59:55.711372 | orchestrator | 12:59:55.711 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 12:59:55.711415 | orchestrator | 12:59:55.711 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:55.711460 | orchestrator | 12:59:55.711 STDOUT terraform:  + image_id = (known after apply) 2025-07-12 12:59:55.711504 | orchestrator | 12:59:55.711 STDOUT terraform:  + metadata = (known after apply) 2025-07-12 12:59:55.711556 | orchestrator | 12:59:55.711 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-07-12 12:59:55.711599 | orchestrator | 12:59:55.711 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:55.711630 | orchestrator | 12:59:55.711 STDOUT terraform:  + size = 80 2025-07-12 12:59:55.711675 | orchestrator | 12:59:55.711 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-12 12:59:55.711708 | orchestrator | 12:59:55.711 STDOUT terraform:  + volume_type = "ssd" 2025-07-12 12:59:55.711731 | orchestrator | 12:59:55.711 STDOUT terraform:  } 2025-07-12 12:59:55.711881 | orchestrator | 
  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + image_id             = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-2-node-base"
      + region               = (known after apply)
      + size                 = 80
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + image_id             = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-3-node-base"
      + region               = (known after apply)
      + size                 = 80
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + image_id             = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-4-node-base"
      + region               = (known after apply)
      + size                 = 80
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + image_id             = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-5-node-base"
      + region               = (known after apply)
      + size                 = 80
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[0] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-0-node-3"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[1] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-1-node-4"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[2] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-2-node-5"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[3] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-3-node-3"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[4] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-4-node-4"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[5] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-5-node-5"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[6] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-6-node-3"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[7] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-7-node-4"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[8] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-8-node-5"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_compute_instance_v2.manager_server will be created
  + resource "openstack_compute_instance_v2" "manager_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-4V-16"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-manager"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = (sensitive value)

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[0] will be created
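The manager instance planned above boots from a pre-created volume (`source_type = "volume"`, `boot_index = 0`) rather than directly from an image. A minimal HCL sketch of how such a resource can be declared with the OpenStack provider; the referenced volume and port resource names here are illustrative assumptions, not the testbed's actual configuration:

```hcl
# Hypothetical sketch: boot an instance from an existing bootable volume.
resource "openstack_compute_instance_v2" "manager_server" {
  name              = "testbed-manager"
  flavor_name       = "OSISM-4V-16"
  key_pair          = "testbed"
  availability_zone = "nova"
  config_drive      = true
  power_state       = "active"

  block_device {
    # Assumed volume resource name; the plan only shows "(known after apply)".
    uuid                  = openstack_blockstorage_volume_v3.manager_base_volume.id
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = false
  }

  network {
    # Assumed pre-created Neutron port, matching the plan's port attribute.
    port = openstack_networking_port_v2.manager_port.id
  }
}
```

With `delete_on_termination = false`, the boot volume survives instance deletion, which is why the plan creates the volumes as separate `openstack_blockstorage_volume_v3` resources.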
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-0"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[1] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-1"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[2] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
orchestrator | 12:59:55.728 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-12 12:59:55.728718 | orchestrator | 12:59:55.728 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-12 12:59:55.728761 | orchestrator | 12:59:55.728 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 12:59:55.728792 | orchestrator | 12:59:55.728 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 12:59:55.728827 | orchestrator | 12:59:55.728 STDOUT terraform:  + config_drive = true 2025-07-12 12:59:55.728874 | orchestrator | 12:59:55.728 STDOUT terraform:  + created = (known after apply) 2025-07-12 12:59:55.728915 | orchestrator | 12:59:55.728 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-12 12:59:55.728951 | orchestrator | 12:59:55.728 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-12 12:59:55.728983 | orchestrator | 12:59:55.728 STDOUT terraform:  + force_delete = false 2025-07-12 12:59:55.729026 | orchestrator | 12:59:55.728 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-12 12:59:55.729105 | orchestrator | 12:59:55.729 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:55.729148 | orchestrator | 12:59:55.729 STDOUT terraform:  + image_id = (known after apply) 2025-07-12 12:59:55.729195 | orchestrator | 12:59:55.729 STDOUT terraform:  + image_name = (known after apply) 2025-07-12 12:59:55.729229 | orchestrator | 12:59:55.729 STDOUT terraform:  + key_pair = "testbed" 2025-07-12 12:59:55.729267 | orchestrator | 12:59:55.729 STDOUT terraform:  + name = "testbed-node-4" 2025-07-12 12:59:55.729298 | orchestrator | 12:59:55.729 STDOUT terraform:  + power_state = "active" 2025-07-12 12:59:55.729339 | orchestrator | 12:59:55.729 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:55.729379 | orchestrator | 12:59:55.729 STDOUT terraform:  + security_groups = (known after apply) 2025-07-12 12:59:55.729408 | orchestrator | 12:59:55.729 STDOUT terraform:  + stop_before_destroy = 
false 2025-07-12 12:59:55.729449 | orchestrator | 12:59:55.729 STDOUT terraform:  + updated = (known after apply) 2025-07-12 12:59:55.729505 | orchestrator | 12:59:55.729 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-07-12 12:59:55.729528 | orchestrator | 12:59:55.729 STDOUT terraform:  + block_device { 2025-07-12 12:59:55.729558 | orchestrator | 12:59:55.729 STDOUT terraform:  + boot_index = 0 2025-07-12 12:59:55.729594 | orchestrator | 12:59:55.729 STDOUT terraform:  + delete_on_termination = false 2025-07-12 12:59:55.729630 | orchestrator | 12:59:55.729 STDOUT terraform:  + destination_type = "volume" 2025-07-12 12:59:55.729678 | orchestrator | 12:59:55.729 STDOUT terraform:  + multiattach = false 2025-07-12 12:59:55.729714 | orchestrator | 12:59:55.729 STDOUT terraform:  + source_type = "volume" 2025-07-12 12:59:55.729758 | orchestrator | 12:59:55.729 STDOUT terraform:  + uuid = (known after apply) 2025-07-12 12:59:55.729778 | orchestrator | 12:59:55.729 STDOUT terraform:  } 2025-07-12 12:59:55.729799 | orchestrator | 12:59:55.729 STDOUT terraform:  + network { 2025-07-12 12:59:55.729826 | orchestrator | 12:59:55.729 STDOUT terraform:  + access_network = false 2025-07-12 12:59:55.729864 | orchestrator | 12:59:55.729 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-07-12 12:59:55.729900 | orchestrator | 12:59:55.729 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-12 12:59:55.729937 | orchestrator | 12:59:55.729 STDOUT terraform:  + mac = (known after apply) 2025-07-12 12:59:55.729981 | orchestrator | 12:59:55.729 STDOUT terraform:  + name = (known after apply) 2025-07-12 12:59:55.730036 | orchestrator | 12:59:55.729 STDOUT terraform:  + port = (known after apply) 2025-07-12 12:59:55.730077 | orchestrator | 12:59:55.730 STDOUT terraform:  + uuid = (known after apply) 2025-07-12 12:59:55.730098 | orchestrator | 12:59:55.730 STDOUT terraform:  } 2025-07-12 12:59:55.730118 | orchestrator | 12:59:55.730 
STDOUT terraform:  } 2025-07-12 12:59:55.730166 | orchestrator | 12:59:55.730 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-07-12 12:59:55.730219 | orchestrator | 12:59:55.730 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-12 12:59:55.730261 | orchestrator | 12:59:55.730 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-12 12:59:55.730301 | orchestrator | 12:59:55.730 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-12 12:59:55.730343 | orchestrator | 12:59:55.730 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-12 12:59:55.730385 | orchestrator | 12:59:55.730 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 12:59:55.730415 | orchestrator | 12:59:55.730 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 12:59:55.730441 | orchestrator | 12:59:55.730 STDOUT terraform:  + config_drive = true 2025-07-12 12:59:55.730481 | orchestrator | 12:59:55.730 STDOUT terraform:  + created = (known after apply) 2025-07-12 12:59:55.730522 | orchestrator | 12:59:55.730 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-12 12:59:55.730556 | orchestrator | 12:59:55.730 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-12 12:59:55.730585 | orchestrator | 12:59:55.730 STDOUT terraform:  + force_delete = false 2025-07-12 12:59:55.730625 | orchestrator | 12:59:55.730 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-12 12:59:55.730682 | orchestrator | 12:59:55.730 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:55.730724 | orchestrator | 12:59:55.730 STDOUT terraform:  + image_id = (known after apply) 2025-07-12 12:59:55.730766 | orchestrator | 12:59:55.730 STDOUT terraform:  + image_name = (known after apply) 2025-07-12 12:59:55.730797 | orchestrator | 12:59:55.730 STDOUT terraform:  + key_pair = "testbed" 2025-07-12 12:59:55.730834 | orchestrator | 12:59:55.730 STDOUT terraform:  + name = 
"testbed-node-5" 2025-07-12 12:59:55.730865 | orchestrator | 12:59:55.730 STDOUT terraform:  + power_state = "active" 2025-07-12 12:59:55.730906 | orchestrator | 12:59:55.730 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:55.730946 | orchestrator | 12:59:55.730 STDOUT terraform:  + security_groups = (known after apply) 2025-07-12 12:59:55.730977 | orchestrator | 12:59:55.730 STDOUT terraform:  + stop_before_destroy = false 2025-07-12 12:59:55.731018 | orchestrator | 12:59:55.730 STDOUT terraform:  + updated = (known after apply) 2025-07-12 12:59:55.731073 | orchestrator | 12:59:55.731 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-07-12 12:59:55.731095 | orchestrator | 12:59:55.731 STDOUT terraform:  + block_device { 2025-07-12 12:59:55.731145 | orchestrator | 12:59:55.731 STDOUT terraform:  + boot_index = 0 2025-07-12 12:59:55.731184 | orchestrator | 12:59:55.731 STDOUT terraform:  + delete_on_termination = false 2025-07-12 12:59:55.731220 | orchestrator | 12:59:55.731 STDOUT terraform:  + destination_type = "volume" 2025-07-12 12:59:55.731254 | orchestrator | 12:59:55.731 STDOUT terraform:  + multiattach = false 2025-07-12 12:59:55.731291 | orchestrator | 12:59:55.731 STDOUT terraform:  + source_type = "volume" 2025-07-12 12:59:55.731335 | orchestrator | 12:59:55.731 STDOUT terraform:  + uuid = (known after apply) 2025-07-12 12:59:55.731357 | orchestrator | 12:59:55.731 STDOUT terraform:  } 2025-07-12 12:59:55.731378 | orchestrator | 12:59:55.731 STDOUT terraform:  + network { 2025-07-12 12:59:55.731405 | orchestrator | 12:59:55.731 STDOUT terraform:  + access_network = false 2025-07-12 12:59:55.731443 | orchestrator | 12:59:55.731 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-07-12 12:59:55.731481 | orchestrator | 12:59:55.731 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-12 12:59:55.731520 | orchestrator | 12:59:55.731 STDOUT terraform:  + mac = (known after apply) 2025-07-12 
12:59:55.731558 | orchestrator | 12:59:55.731 STDOUT terraform:  + name = (known after apply) 2025-07-12 12:59:55.731599 | orchestrator | 12:59:55.731 STDOUT terraform:  + port = (known after apply) 2025-07-12 12:59:55.731649 | orchestrator | 12:59:55.731 STDOUT terraform:  + uuid = (known after apply) 2025-07-12 12:59:55.731670 | orchestrator | 12:59:55.731 STDOUT terraform:  } 2025-07-12 12:59:55.731691 | orchestrator | 12:59:55.731 STDOUT terraform:  } 2025-07-12 12:59:55.731732 | orchestrator | 12:59:55.731 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-07-12 12:59:55.731773 | orchestrator | 12:59:55.731 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-07-12 12:59:55.731808 | orchestrator | 12:59:55.731 STDOUT terraform:  + fingerprint = (known after apply) 2025-07-12 12:59:55.731843 | orchestrator | 12:59:55.731 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:55.731871 | orchestrator | 12:59:55.731 STDOUT terraform:  + name = "testbed" 2025-07-12 12:59:55.731903 | orchestrator | 12:59:55.731 STDOUT terraform:  + private_key = (sensitive value) 2025-07-12 12:59:55.731938 | orchestrator | 12:59:55.731 STDOUT terraform:  + public_key = (known after apply) 2025-07-12 12:59:55.731974 | orchestrator | 12:59:55.731 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:55.732011 | orchestrator | 12:59:55.731 STDOUT terraform:  + user_id = (known after apply) 2025-07-12 12:59:55.732032 | orchestrator | 12:59:55.732 STDOUT terraform:  } 2025-07-12 12:59:55.732087 | orchestrator | 12:59:55.732 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-07-12 12:59:55.732142 | orchestrator | 12:59:55.732 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-12 12:59:55.732177 | orchestrator | 12:59:55.732 STDOUT terraform:  + device = (known after apply) 2025-07-12 12:59:55.732211 | orchestrator | 
12:59:55.732 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:55.732251 | orchestrator | 12:59:55.732 STDOUT terraform:  + instance_id = (known after apply) 2025-07-12 12:59:55.732286 | orchestrator | 12:59:55.732 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:55.732321 | orchestrator | 12:59:55.732 STDOUT terraform:  + volume_id = (known after apply) 2025-07-12 12:59:55.732342 | orchestrator | 12:59:55.732 STDOUT terraform:  } 2025-07-12 12:59:55.732397 | orchestrator | 12:59:55.732 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2025-07-12 12:59:55.732451 | orchestrator | 12:59:55.732 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-12 12:59:55.732487 | orchestrator | 12:59:55.732 STDOUT terraform:  + device = (known after apply) 2025-07-12 12:59:55.732523 | orchestrator | 12:59:55.732 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:55.732558 | orchestrator | 12:59:55.732 STDOUT terraform:  + instance_id = (known after apply) 2025-07-12 12:59:55.732592 | orchestrator | 12:59:55.732 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:55.732628 | orchestrator | 12:59:55.732 STDOUT terraform:  + volume_id = (known after apply) 2025-07-12 12:59:55.732675 | orchestrator | 12:59:55.732 STDOUT terraform:  } 2025-07-12 12:59:55.732732 | orchestrator | 12:59:55.732 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2025-07-12 12:59:55.732785 | orchestrator | 12:59:55.732 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-12 12:59:55.732819 | orchestrator | 12:59:55.732 STDOUT terraform:  + device = (known after apply) 2025-07-12 12:59:55.732854 | orchestrator | 12:59:55.732 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:55.732888 | orchestrator | 12:59:55.732 STDOUT terraform:  + instance_id = 
(known after apply) 2025-07-12 12:59:55.732922 | orchestrator | 12:59:55.732 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:55.732956 | orchestrator | 12:59:55.732 STDOUT terraform:  + volume_id = (known after apply) 2025-07-12 12:59:55.732975 | orchestrator | 12:59:55.732 STDOUT terraform:  } 2025-07-12 12:59:55.733030 | orchestrator | 12:59:55.732 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-07-12 12:59:55.733083 | orchestrator | 12:59:55.733 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-12 12:59:55.733118 | orchestrator | 12:59:55.733 STDOUT terraform:  + device = (known after apply) 2025-07-12 12:59:55.733153 | orchestrator | 12:59:55.733 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:55.733189 | orchestrator | 12:59:55.733 STDOUT terraform:  + instance_id = (known after apply) 2025-07-12 12:59:55.733223 | orchestrator | 12:59:55.733 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:55.733256 | orchestrator | 12:59:55.733 STDOUT terraform:  + volume_id = (known after apply) 2025-07-12 12:59:55.733275 | orchestrator | 12:59:55.733 STDOUT terraform:  } 2025-07-12 12:59:55.733330 | orchestrator | 12:59:55.733 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-07-12 12:59:55.733393 | orchestrator | 12:59:55.733 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-12 12:59:55.733427 | orchestrator | 12:59:55.733 STDOUT terraform:  + device = (known after apply) 2025-07-12 12:59:55.733462 | orchestrator | 12:59:55.733 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:55.733496 | orchestrator | 12:59:55.733 STDOUT terraform:  + instance_id = (known after apply) 2025-07-12 12:59:55.733531 | orchestrator | 12:59:55.733 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:55.733566 
| orchestrator | 12:59:55.733 STDOUT terraform:  + volume_id = (known after apply) 2025-07-12 12:59:55.733586 | orchestrator | 12:59:55.733 STDOUT terraform:  } 2025-07-12 12:59:55.733653 | orchestrator | 12:59:55.733 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created 2025-07-12 12:59:55.733709 | orchestrator | 12:59:55.733 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-12 12:59:55.733743 | orchestrator | 12:59:55.733 STDOUT terraform:  + device = (known after apply) 2025-07-12 12:59:55.733779 | orchestrator | 12:59:55.733 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:55.733813 | orchestrator | 12:59:55.733 STDOUT terraform:  + instance_id = (known after apply) 2025-07-12 12:59:55.733847 | orchestrator | 12:59:55.733 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:55.733882 | orchestrator | 12:59:55.733 STDOUT terraform:  + volume_id = (known after apply) 2025-07-12 12:59:55.733904 | orchestrator | 12:59:55.733 STDOUT terraform:  } 2025-07-12 12:59:55.733958 | orchestrator | 12:59:55.733 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-07-12 12:59:55.734028 | orchestrator | 12:59:55.733 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-12 12:59:55.734068 | orchestrator | 12:59:55.734 STDOUT terraform:  + device = (known after apply) 2025-07-12 12:59:55.734103 | orchestrator | 12:59:55.734 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:55.734139 | orchestrator | 12:59:55.734 STDOUT terraform:  + instance_id = (known after apply) 2025-07-12 12:59:55.734175 | orchestrator | 12:59:55.734 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:55.734209 | orchestrator | 12:59:55.734 STDOUT terraform:  + volume_id = (known after apply) 2025-07-12 12:59:55.734230 | orchestrator | 12:59:55.734 STDOUT 
terraform:  } 2025-07-12 12:59:55.734286 | orchestrator | 12:59:55.734 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-07-12 12:59:55.734339 | orchestrator | 12:59:55.734 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-12 12:59:55.734379 | orchestrator | 12:59:55.734 STDOUT terraform:  + device = (known after apply) 2025-07-12 12:59:55.734415 | orchestrator | 12:59:55.734 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:55.734449 | orchestrator | 12:59:55.734 STDOUT terraform:  + instance_id = (known after apply) 2025-07-12 12:59:55.734484 | orchestrator | 12:59:55.734 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:55.734522 | orchestrator | 12:59:55.734 STDOUT terraform:  + volume_id = (known after apply) 2025-07-12 12:59:55.734544 | orchestrator | 12:59:55.734 STDOUT terraform:  } 2025-07-12 12:59:55.734599 | orchestrator | 12:59:55.734 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-07-12 12:59:55.734667 | orchestrator | 12:59:55.734 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-12 12:59:55.734706 | orchestrator | 12:59:55.734 STDOUT terraform:  + device = (known after apply) 2025-07-12 12:59:55.734743 | orchestrator | 12:59:55.734 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:55.734779 | orchestrator | 12:59:55.734 STDOUT terraform:  + instance_id = (known after apply) 2025-07-12 12:59:55.734813 | orchestrator | 12:59:55.734 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:55.734848 | orchestrator | 12:59:55.734 STDOUT terraform:  + volume_id = (known after apply) 2025-07-12 12:59:55.734868 | orchestrator | 12:59:55.734 STDOUT terraform:  } 2025-07-12 12:59:55.734935 | orchestrator | 12:59:55.734 STDOUT terraform:  # 
openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-07-12 12:59:55.734997 | orchestrator | 12:59:55.734 STDOUT terraform:  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-07-12 12:59:55.735032 | orchestrator | 12:59:55.735 STDOUT terraform:  + fixed_ip = (known after apply) 2025-07-12 12:59:55.735066 | orchestrator | 12:59:55.735 STDOUT terraform:  + floating_ip = (known after apply) 2025-07-12 12:59:55.735101 | orchestrator | 12:59:55.735 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:55.735136 | orchestrator | 12:59:55.735 STDOUT terraform:  + port_id = (known after apply) 2025-07-12 12:59:55.735170 | orchestrator | 12:59:55.735 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:55.735192 | orchestrator | 12:59:55.735 STDOUT terraform:  } 2025-07-12 12:59:55.735301 | orchestrator | 12:59:55.735 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-07-12 12:59:55.735356 | orchestrator | 12:59:55.735 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-07-12 12:59:55.735390 | orchestrator | 12:59:55.735 STDOUT terraform:  + address = (known after apply) 2025-07-12 12:59:55.735423 | orchestrator | 12:59:55.735 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 12:59:55.735455 | orchestrator | 12:59:55.735 STDOUT terraform:  + dns_domain = (known after apply) 2025-07-12 12:59:55.735488 | orchestrator | 12:59:55.735 STDOUT terraform:  + dns_name = (known after apply) 2025-07-12 12:59:55.735520 | orchestrator | 12:59:55.735 STDOUT terraform:  + fixed_ip = (known after apply) 2025-07-12 12:59:55.735551 | orchestrator | 12:59:55.735 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:55.735578 | orchestrator | 12:59:55.735 STDOUT terraform:  + pool = "public" 2025-07-12 12:59:55.735612 | orchestrator | 12:59:55.735 STDOUT terraform:  + 
port_id = (known after apply) 2025-07-12 12:59:55.735669 | orchestrator | 12:59:55.735 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:55.735706 | orchestrator | 12:59:55.735 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-12 12:59:55.735746 | orchestrator | 12:59:55.735 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 12:59:55.735767 | orchestrator | 12:59:55.735 STDOUT terraform:  } 2025-07-12 12:59:55.735819 | orchestrator | 12:59:55.735 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-07-12 12:59:55.735869 | orchestrator | 12:59:55.735 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-07-12 12:59:55.735913 | orchestrator | 12:59:55.735 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-12 12:59:55.735958 | orchestrator | 12:59:55.735 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 12:59:55.735988 | orchestrator | 12:59:55.735 STDOUT terraform:  + availability_zone_hints = [ 2025-07-12 12:59:55.736012 | orchestrator | 12:59:55.735 STDOUT terraform:  + "nova", 2025-07-12 12:59:55.736036 | orchestrator | 12:59:55.736 STDOUT terraform:  ] 2025-07-12 12:59:55.736079 | orchestrator | 12:59:55.736 STDOUT terraform:  + dns_domain = (known after apply) 2025-07-12 12:59:55.736122 | orchestrator | 12:59:55.736 STDOUT terraform:  + external = (known after apply) 2025-07-12 12:59:55.736165 | orchestrator | 12:59:55.736 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:55.736208 | orchestrator | 12:59:55.736 STDOUT terraform:  + mtu = (known after apply) 2025-07-12 12:59:55.736255 | orchestrator | 12:59:55.736 STDOUT terraform:  + name = "net-testbed-management" 2025-07-12 12:59:55.736297 | orchestrator | 12:59:55.736 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-12 12:59:55.736340 | orchestrator | 12:59:55.736 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-12 
12:59:55.736386 | orchestrator | 12:59:55.736 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:55.736432 | orchestrator | 12:59:55.736 STDOUT terraform:  + shared = (known after apply) 2025-07-12 12:59:55.736477 | orchestrator | 12:59:55.736 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 12:59:55.736519 | orchestrator | 12:59:55.736 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-07-12 12:59:55.736550 | orchestrator | 12:59:55.736 STDOUT terraform:  + segments (known after apply) 2025-07-12 12:59:55.736571 | orchestrator | 12:59:55.736 STDOUT terraform:  } 2025-07-12 12:59:55.736624 | orchestrator | 12:59:55.736 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-07-12 12:59:55.736690 | orchestrator | 12:59:55.736 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-07-12 12:59:55.736733 | orchestrator | 12:59:55.736 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-12 12:59:55.736775 | orchestrator | 12:59:55.736 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-12 12:59:55.736818 | orchestrator | 12:59:55.736 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-12 12:59:55.736861 | orchestrator | 12:59:55.736 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 12:59:55.736903 | orchestrator | 12:59:55.736 STDOUT terraform:  + device_id = (known after apply) 2025-07-12 12:59:55.736951 | orchestrator | 12:59:55.736 STDOUT terraform:  + device_owner = (known after apply) 2025-07-12 12:59:55.736993 | orchestrator | 12:59:55.736 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-12 12:59:55.737036 | orchestrator | 12:59:55.737 STDOUT terraform:  + dns_name = (known after apply) 2025-07-12 12:59:55.737078 | orchestrator | 12:59:55.737 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:55.737120 | orchestrator | 12:59:55.737 STDOUT terraform:  + 
mac_address = (known after apply) 2025-07-12 12:59:55.737163 | orchestrator | 12:59:55.737 STDOUT terraform:  + network_id = (known after apply) 2025-07-12 12:59:55.737205 | orchestrator | 12:59:55.737 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-12 12:59:55.737246 | orchestrator | 12:59:55.737 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-12 12:59:55.737288 | orchestrator | 12:59:55.737 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:55.737329 | orchestrator | 12:59:55.737 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-12 12:59:55.737373 | orchestrator | 12:59:55.737 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 12:59:55.737399 | orchestrator | 12:59:55.737 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 12:59:55.737435 | orchestrator | 12:59:55.737 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-12 12:59:55.737456 | orchestrator | 12:59:55.737 STDOUT terraform:  } 2025-07-12 12:59:55.737482 | orchestrator | 12:59:55.737 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 12:59:55.737517 | orchestrator | 12:59:55.737 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-12 12:59:55.737537 | orchestrator | 12:59:55.737 STDOUT terraform:  } 2025-07-12 12:59:55.737566 | orchestrator | 12:59:55.737 STDOUT terraform:  + binding (known after apply) 2025-07-12 12:59:55.737587 | orchestrator | 12:59:55.737 STDOUT terraform:  + fixed_ip { 2025-07-12 12:59:55.737618 | orchestrator | 12:59:55.737 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-07-12 12:59:55.737679 | orchestrator | 12:59:55.737 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-12 12:59:55.737703 | orchestrator | 12:59:55.737 STDOUT terraform:  } 2025-07-12 12:59:55.737723 | orchestrator | 12:59:55.737 STDOUT terraform:  } 2025-07-12 12:59:55.737775 | orchestrator | 12:59:55.737 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will 
be created 2025-07-12 12:59:55.737825 | orchestrator | 12:59:55.737 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-12 12:59:55.737874 | orchestrator | 12:59:55.737 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-12 12:59:55.737917 | orchestrator | 12:59:55.737 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-12 12:59:55.737958 | orchestrator | 12:59:55.737 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-12 12:59:55.738000 | orchestrator | 12:59:55.737 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 12:59:55.738058 | orchestrator | 12:59:55.738 STDOUT terraform:  + device_id = (known after apply) 2025-07-12 12:59:55.738106 | orchestrator | 12:59:55.738 STDOUT terraform:  + device_owner = (known after apply) 2025-07-12 12:59:55.738149 | orchestrator | 12:59:55.738 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-12 12:59:55.738193 | orchestrator | 12:59:55.738 STDOUT terraform:  + dns_name = (known after apply) 2025-07-12 12:59:55.738238 | orchestrator | 12:59:55.738 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:55.738284 | orchestrator | 12:59:55.738 STDOUT terraform:  + mac_address = (known after apply) 2025-07-12 12:59:55.738327 | orchestrator | 12:59:55.738 STDOUT terraform:  + network_id = (known after apply) 2025-07-12 12:59:55.738369 | orchestrator | 12:59:55.738 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-12 12:59:55.738411 | orchestrator | 12:59:55.738 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-12 12:59:55.738453 | orchestrator | 12:59:55.738 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:55.738495 | orchestrator | 12:59:55.738 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-12 12:59:55.738538 | orchestrator | 12:59:55.738 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 12:59:55.738564 | 
orchestrator | 12:59:55.738 STDOUT terraform:       + allowed_address_pairs {
2025-07-12 12:59:55.738601 | orchestrator | 12:59:55.738 STDOUT terraform:           + ip_address = "192.168.112.0/20"
2025-07-12 12:59:55.738623 | orchestrator | 12:59:55.738 STDOUT terraform:         }
2025-07-12 12:59:55.738665 | orchestrator | 12:59:55.738 STDOUT terraform:       + allowed_address_pairs {
2025-07-12 12:59:55.738701 | orchestrator | 12:59:55.738 STDOUT terraform:           + ip_address = "192.168.16.254/20"
2025-07-12 12:59:55.738723 | orchestrator | 12:59:55.738 STDOUT terraform:         }
2025-07-12 12:59:55.738749 | orchestrator | 12:59:55.738 STDOUT terraform:       + allowed_address_pairs {
2025-07-12 12:59:55.738784 | orchestrator | 12:59:55.738 STDOUT terraform:           + ip_address = "192.168.16.8/20"
2025-07-12 12:59:55.738805 | orchestrator | 12:59:55.738 STDOUT terraform:         }
2025-07-12 12:59:55.738831 | orchestrator | 12:59:55.738 STDOUT terraform:       + allowed_address_pairs {
2025-07-12 12:59:55.738866 | orchestrator | 12:59:55.738 STDOUT terraform:           + ip_address = "192.168.16.9/20"
2025-07-12 12:59:55.738890 | orchestrator | 12:59:55.738 STDOUT terraform:         }
2025-07-12 12:59:55.738920 | orchestrator | 12:59:55.738 STDOUT terraform:       + binding (known after apply)
2025-07-12 12:59:55.738944 | orchestrator | 12:59:55.738 STDOUT terraform:       + fixed_ip {
2025-07-12 12:59:55.738976 | orchestrator | 12:59:55.738 STDOUT terraform:           + ip_address = "192.168.16.10"
2025-07-12 12:59:55.739013 | orchestrator | 12:59:55.738 STDOUT terraform:           + subnet_id = (known after apply)
2025-07-12 12:59:55.739033 | orchestrator | 12:59:55.739 STDOUT terraform:         }
2025-07-12 12:59:55.739054 | orchestrator | 12:59:55.739 STDOUT terraform:     }
2025-07-12 12:59:55.739105 | orchestrator | 12:59:55.739 STDOUT terraform:   # openstack_networking_port_v2.node_port_management[1] will be created
2025-07-12 12:59:55.739156 | orchestrator | 12:59:55.739 STDOUT terraform:   + resource "openstack_networking_port_v2" "node_port_management" {
2025-07-12 12:59:55.739204 | orchestrator | 12:59:55.739 STDOUT terraform:       + admin_state_up = (known after apply)
2025-07-12 12:59:55.739246 | orchestrator | 12:59:55.739 STDOUT terraform:       + all_fixed_ips = (known after apply)
2025-07-12 12:59:55.739288 | orchestrator | 12:59:55.739 STDOUT terraform:       + all_security_group_ids = (known after apply)
2025-07-12 12:59:55.739333 | orchestrator | 12:59:55.739 STDOUT terraform:       + all_tags = (known after apply)
2025-07-12 12:59:55.739379 | orchestrator | 12:59:55.739 STDOUT terraform:       + device_id = (known after apply)
2025-07-12 12:59:55.739421 | orchestrator | 12:59:55.739 STDOUT terraform:       + device_owner = (known after apply)
2025-07-12 12:59:55.739462 | orchestrator | 12:59:55.739 STDOUT terraform:       + dns_assignment = (known after apply)
2025-07-12 12:59:55.739505 | orchestrator | 12:59:55.739 STDOUT terraform:       + dns_name = (known after apply)
2025-07-12 12:59:55.739547 | orchestrator | 12:59:55.739 STDOUT terraform:       + id = (known after apply)
2025-07-12 12:59:55.739590 | orchestrator | 12:59:55.739 STDOUT terraform:       + mac_address = (known after apply)
2025-07-12 12:59:55.739631 | orchestrator | 12:59:55.739 STDOUT terraform:       + network_id = (known after apply)
2025-07-12 12:59:55.739685 | orchestrator | 12:59:55.739 STDOUT terraform:       + port_security_enabled = (known after apply)
2025-07-12 12:59:55.739728 | orchestrator | 12:59:55.739 STDOUT terraform:       + qos_policy_id = (known after apply)
2025-07-12 12:59:55.739770 | orchestrator | 12:59:55.739 STDOUT terraform:       + region = (known after apply)
2025-07-12 12:59:55.739810 | orchestrator | 12:59:55.739 STDOUT terraform:       + security_group_ids = (known after apply)
2025-07-12 12:59:55.739852 | orchestrator | 12:59:55.739 STDOUT terraform:       + tenant_id = (known after apply)
2025-07-12 12:59:55.739878 | orchestrator | 12:59:55.739 STDOUT terraform:       + allowed_address_pairs {
2025-07-12 12:59:55.739912 | orchestrator | 12:59:55.739 STDOUT terraform:           + ip_address = "192.168.112.0/20"
2025-07-12 12:59:55.739931 | orchestrator | 12:59:55.739 STDOUT terraform:         }
2025-07-12 12:59:55.739957 | orchestrator | 12:59:55.739 STDOUT terraform:       + allowed_address_pairs {
2025-07-12 12:59:55.739992 | orchestrator | 12:59:55.739 STDOUT terraform:           + ip_address = "192.168.16.254/20"
2025-07-12 12:59:55.740013 | orchestrator | 12:59:55.740 STDOUT terraform:         }
2025-07-12 12:59:55.740038 | orchestrator | 12:59:55.740 STDOUT terraform:       + allowed_address_pairs {
2025-07-12 12:59:55.740074 | orchestrator | 12:59:55.740 STDOUT terraform:           + ip_address = "192.168.16.8/20"
2025-07-12 12:59:55.740096 | orchestrator | 12:59:55.740 STDOUT terraform:         }
2025-07-12 12:59:55.740122 | orchestrator | 12:59:55.740 STDOUT terraform:       + allowed_address_pairs {
2025-07-12 12:59:55.740155 | orchestrator | 12:59:55.740 STDOUT terraform:           + ip_address = "192.168.16.9/20"
2025-07-12 12:59:55.740175 | orchestrator | 12:59:55.740 STDOUT terraform:         }
2025-07-12 12:59:55.740205 | orchestrator | 12:59:55.740 STDOUT terraform:       + binding (known after apply)
2025-07-12 12:59:55.740226 | orchestrator | 12:59:55.740 STDOUT terraform:       + fixed_ip {
2025-07-12 12:59:55.740261 | orchestrator | 12:59:55.740 STDOUT terraform:           + ip_address = "192.168.16.11"
2025-07-12 12:59:55.740298 | orchestrator | 12:59:55.740 STDOUT terraform:           + subnet_id = (known after apply)
2025-07-12 12:59:55.740318 | orchestrator | 12:59:55.740 STDOUT terraform:         }
2025-07-12 12:59:55.740338 | orchestrator | 12:59:55.740 STDOUT terraform:     }
2025-07-12 12:59:55.740390 | orchestrator | 12:59:55.740 STDOUT terraform:   # openstack_networking_port_v2.node_port_management[2] will be created
2025-07-12 12:59:55.740442 | orchestrator | 12:59:55.740 STDOUT terraform:   + resource "openstack_networking_port_v2" "node_port_management" {
2025-07-12 12:59:55.740483 | orchestrator | 12:59:55.740 STDOUT terraform:       + admin_state_up = (known after apply)
2025-07-12 12:59:55.740526 | orchestrator | 12:59:55.740 STDOUT terraform:       + all_fixed_ips = (known after apply)
2025-07-12 12:59:55.740568 | orchestrator | 12:59:55.740 STDOUT terraform:       + all_security_group_ids = (known after apply)
2025-07-12 12:59:55.740609 | orchestrator | 12:59:55.740 STDOUT terraform:       + all_tags = (known after apply)
2025-07-12 12:59:55.740674 | orchestrator | 12:59:55.740 STDOUT terraform:       + device_id = (known after apply)
2025-07-12 12:59:55.740717 | orchestrator | 12:59:55.740 STDOUT terraform:       + device_owner = (known after apply)
2025-07-12 12:59:55.740760 | orchestrator | 12:59:55.740 STDOUT terraform:       + dns_assignment = (known after apply)
2025-07-12 12:59:55.740802 | orchestrator | 12:59:55.740 STDOUT terraform:       + dns_name = (known after apply)
2025-07-12 12:59:55.740848 | orchestrator | 12:59:55.740 STDOUT terraform:       + id = (known after apply)
2025-07-12 12:59:55.740890 | orchestrator | 12:59:55.740 STDOUT terraform:       + mac_address = (known after apply)
2025-07-12 12:59:55.740932 | orchestrator | 12:59:55.740 STDOUT terraform:       + network_id = (known after apply)
2025-07-12 12:59:55.740973 | orchestrator | 12:59:55.740 STDOUT terraform:       + port_security_enabled = (known after apply)
2025-07-12 12:59:55.741016 | orchestrator | 12:59:55.740 STDOUT terraform:       + qos_policy_id = (known after apply)
2025-07-12 12:59:55.741060 | orchestrator | 12:59:55.741 STDOUT terraform:       + region = (known after apply)
2025-07-12 12:59:55.741102 | orchestrator | 12:59:55.741 STDOUT terraform:       + security_group_ids = (known after apply)
2025-07-12 12:59:55.741144 | orchestrator | 12:59:55.741 STDOUT terraform:       + tenant_id = (known after apply)
2025-07-12 12:59:55.741172 | orchestrator | 12:59:55.741 STDOUT terraform:       + allowed_address_pairs {
2025-07-12 12:59:55.741207 | orchestrator | 12:59:55.741 STDOUT terraform:           + ip_address = "192.168.112.0/20"
2025-07-12 12:59:55.741229 | orchestrator | 12:59:55.741 STDOUT terraform:         }
2025-07-12 12:59:55.741256 | orchestrator | 12:59:55.741 STDOUT terraform:       + allowed_address_pairs {
2025-07-12 12:59:55.741292 | orchestrator | 12:59:55.741 STDOUT terraform:           + ip_address = "192.168.16.254/20"
2025-07-12 12:59:55.741312 | orchestrator | 12:59:55.741 STDOUT terraform:         }
2025-07-12 12:59:55.741338 | orchestrator | 12:59:55.741 STDOUT terraform:       + allowed_address_pairs {
2025-07-12 12:59:55.741373 | orchestrator | 12:59:55.741 STDOUT terraform:           + ip_address = "192.168.16.8/20"
2025-07-12 12:59:55.741398 | orchestrator | 12:59:55.741 STDOUT terraform:         }
2025-07-12 12:59:55.741424 | orchestrator | 12:59:55.741 STDOUT terraform:       + allowed_address_pairs {
2025-07-12 12:59:55.741463 | orchestrator | 12:59:55.741 STDOUT terraform:           + ip_address = "192.168.16.9/20"
2025-07-12 12:59:55.741485 | orchestrator | 12:59:55.741 STDOUT terraform:         }
2025-07-12 12:59:55.741515 | orchestrator | 12:59:55.741 STDOUT terraform:       + binding (known after apply)
2025-07-12 12:59:55.741538 | orchestrator | 12:59:55.741 STDOUT terraform:       + fixed_ip {
2025-07-12 12:59:55.741569 | orchestrator | 12:59:55.741 STDOUT terraform:           + ip_address = "192.168.16.12"
2025-07-12 12:59:55.741607 | orchestrator | 12:59:55.741 STDOUT terraform:           + subnet_id = (known after apply)
2025-07-12 12:59:55.741628 | orchestrator | 12:59:55.741 STDOUT terraform:         }
2025-07-12 12:59:55.741661 | orchestrator | 12:59:55.741 STDOUT terraform:     }
2025-07-12 12:59:55.741713 | orchestrator | 12:59:55.741 STDOUT terraform:   # openstack_networking_port_v2.node_port_management[3] will be created
2025-07-12 12:59:55.741763 | orchestrator | 12:59:55.741 STDOUT terraform:   + resource "openstack_networking_port_v2" "node_port_management" {
2025-07-12 12:59:55.741804 | orchestrator | 12:59:55.741 STDOUT terraform:       + admin_state_up = (known after apply)
2025-07-12 12:59:55.741846 | orchestrator | 12:59:55.741 STDOUT terraform:       + all_fixed_ips = (known after apply)
2025-07-12 12:59:55.741886 | orchestrator | 12:59:55.741 STDOUT terraform:       + all_security_group_ids = (known after apply)
2025-07-12 12:59:55.741929 | orchestrator | 12:59:55.741 STDOUT terraform:       + all_tags = (known after apply)
2025-07-12 12:59:55.741970 | orchestrator | 12:59:55.741 STDOUT terraform:       + device_id = (known after apply)
2025-07-12 12:59:55.742036 | orchestrator | 12:59:55.741 STDOUT terraform:       + device_owner = (known after apply)
2025-07-12 12:59:55.742085 | orchestrator | 12:59:55.742 STDOUT terraform:       + dns_assignment = (known after apply)
2025-07-12 12:59:55.742129 | orchestrator | 12:59:55.742 STDOUT terraform:       + dns_name = (known after apply)
2025-07-12 12:59:55.742177 | orchestrator | 12:59:55.742 STDOUT terraform:       + id = (known after apply)
2025-07-12 12:59:55.742220 | orchestrator | 12:59:55.742 STDOUT terraform:       + mac_address = (known after apply)
2025-07-12 12:59:55.742261 | orchestrator | 12:59:55.742 STDOUT terraform:       + network_id = (known after apply)
2025-07-12 12:59:55.742302 | orchestrator | 12:59:55.742 STDOUT terraform:       + port_security_enabled = (known after apply)
2025-07-12 12:59:55.742343 | orchestrator | 12:59:55.742 STDOUT terraform:       + qos_policy_id = (known after apply)
2025-07-12 12:59:55.742386 | orchestrator | 12:59:55.742 STDOUT terraform:       + region = (known after apply)
2025-07-12 12:59:55.742428 | orchestrator | 12:59:55.742 STDOUT terraform:       + security_group_ids = (known after apply)
2025-07-12 12:59:55.742469 | orchestrator | 12:59:55.742 STDOUT terraform:       + tenant_id = (known after apply)
2025-07-12 12:59:55.742495 | orchestrator | 12:59:55.742 STDOUT terraform:       + allowed_address_pairs {
2025-07-12 12:59:55.742529 | orchestrator | 12:59:55.742 STDOUT terraform:           + ip_address = "192.168.112.0/20"
2025-07-12 12:59:55.742554 | orchestrator | 12:59:55.742 STDOUT terraform:         }
2025-07-12 12:59:55.742579 | orchestrator | 12:59:55.742 STDOUT terraform:       + allowed_address_pairs {
2025-07-12 12:59:55.742613 | orchestrator | 12:59:55.742 STDOUT terraform:           + ip_address = "192.168.16.254/20"
2025-07-12 12:59:55.742642 | orchestrator | 12:59:55.742 STDOUT terraform:         }
2025-07-12 12:59:55.742672 | orchestrator | 12:59:55.742 STDOUT terraform:       + allowed_address_pairs {
2025-07-12 12:59:55.742706 | orchestrator | 12:59:55.742 STDOUT terraform:           + ip_address = "192.168.16.8/20"
2025-07-12 12:59:55.742727 | orchestrator | 12:59:55.742 STDOUT terraform:         }
2025-07-12 12:59:55.742753 | orchestrator | 12:59:55.742 STDOUT terraform:       + allowed_address_pairs {
2025-07-12 12:59:55.742787 | orchestrator | 12:59:55.742 STDOUT terraform:           + ip_address = "192.168.16.9/20"
2025-07-12 12:59:55.742806 | orchestrator | 12:59:55.742 STDOUT terraform:         }
2025-07-12 12:59:55.742836 | orchestrator | 12:59:55.742 STDOUT terraform:       + binding (known after apply)
2025-07-12 12:59:55.742856 | orchestrator | 12:59:55.742 STDOUT terraform:       + fixed_ip {
2025-07-12 12:59:55.742886 | orchestrator | 12:59:55.742 STDOUT terraform:           + ip_address = "192.168.16.13"
2025-07-12 12:59:55.742921 | orchestrator | 12:59:55.742 STDOUT terraform:           + subnet_id = (known after apply)
2025-07-12 12:59:55.742943 | orchestrator | 12:59:55.742 STDOUT terraform:         }
2025-07-12 12:59:55.742963 | orchestrator | 12:59:55.742 STDOUT terraform:     }
2025-07-12 12:59:55.743014 | orchestrator | 12:59:55.742 STDOUT terraform:   # openstack_networking_port_v2.node_port_management[4] will be created
2025-07-12 12:59:55.743066 | orchestrator | 12:59:55.743 STDOUT terraform:   + resource "openstack_networking_port_v2" "node_port_management" {
2025-07-12 12:59:55.743107 | orchestrator | 12:59:55.743 STDOUT terraform:       + admin_state_up = (known after apply)
2025-07-12 12:59:55.743149 | orchestrator | 12:59:55.743 STDOUT terraform:       + all_fixed_ips = (known after apply)
2025-07-12 12:59:55.743190 | orchestrator | 12:59:55.743 STDOUT terraform:       + all_security_group_ids = (known after apply)
2025-07-12 12:59:55.743232 | orchestrator | 12:59:55.743 STDOUT terraform:       + all_tags = (known after apply)
2025-07-12 12:59:55.743274 | orchestrator | 12:59:55.743 STDOUT terraform:       + device_id = (known after apply)
2025-07-12 12:59:55.743315 | orchestrator | 12:59:55.743 STDOUT terraform:       + device_owner = (known after apply)
2025-07-12 12:59:55.743356 | orchestrator | 12:59:55.743 STDOUT terraform:       + dns_assignment = (known after apply)
2025-07-12 12:59:55.743397 | orchestrator | 12:59:55.743 STDOUT terraform:       + dns_name = (known after apply)
2025-07-12 12:59:55.743439 | orchestrator | 12:59:55.743 STDOUT terraform:       + id = (known after apply)
2025-07-12 12:59:55.743482 | orchestrator | 12:59:55.743 STDOUT terraform:       + mac_address = (known after apply)
2025-07-12 12:59:55.743523 | orchestrator | 12:59:55.743 STDOUT terraform:       + network_id = (known after apply)
2025-07-12 12:59:55.743567 | orchestrator | 12:59:55.743 STDOUT terraform:       + port_security_enabled = (known after apply)
2025-07-12 12:59:55.743613 | orchestrator | 12:59:55.743 STDOUT terraform:       + qos_policy_id = (known after apply)
2025-07-12 12:59:55.743677 | orchestrator | 12:59:55.743 STDOUT terraform:       + region = (known after apply)
2025-07-12 12:59:55.743721 | orchestrator | 12:59:55.743 STDOUT terraform:       + security_group_ids = (known after apply)
2025-07-12 12:59:55.743763 | orchestrator | 12:59:55.743 STDOUT terraform:       + tenant_id = (known after apply)
2025-07-12 12:59:55.743789 | orchestrator | 12:59:55.743 STDOUT terraform:       + allowed_address_pairs {
2025-07-12 12:59:55.743826 | orchestrator | 12:59:55.743 STDOUT terraform:           + ip_address = "192.168.112.0/20"
2025-07-12 12:59:55.743847 | orchestrator | 12:59:55.743 STDOUT terraform:         }
2025-07-12 12:59:55.743873 | orchestrator | 12:59:55.743 STDOUT terraform:       + allowed_address_pairs {
2025-07-12 12:59:55.743910 | orchestrator | 12:59:55.743 STDOUT terraform:           + ip_address = "192.168.16.254/20"
2025-07-12 12:59:55.743930 | orchestrator | 12:59:55.743 STDOUT terraform:         }
2025-07-12 12:59:55.743956 | orchestrator | 12:59:55.743 STDOUT terraform:       + allowed_address_pairs {
2025-07-12 12:59:55.743991 | orchestrator | 12:59:55.743 STDOUT terraform:           + ip_address = "192.168.16.8/20"
2025-07-12 12:59:55.744011 | orchestrator | 12:59:55.743 STDOUT terraform:         }
2025-07-12 12:59:55.744038 | orchestrator | 12:59:55.744 STDOUT terraform:       + allowed_address_pairs {
2025-07-12 12:59:55.744074 | orchestrator | 12:59:55.744 STDOUT terraform:           + ip_address = "192.168.16.9/20"
2025-07-12 12:59:55.744094 | orchestrator | 12:59:55.744 STDOUT terraform:         }
2025-07-12 12:59:55.744125 | orchestrator | 12:59:55.744 STDOUT terraform:       + binding (known after apply)
2025-07-12 12:59:55.744146 | orchestrator | 12:59:55.744 STDOUT terraform:       + fixed_ip {
2025-07-12 12:59:55.744179 | orchestrator | 12:59:55.744 STDOUT terraform:           + ip_address = "192.168.16.14"
2025-07-12 12:59:55.744216 | orchestrator | 12:59:55.744 STDOUT terraform:           + subnet_id = (known after apply)
2025-07-12 12:59:55.744237 | orchestrator | 12:59:55.744 STDOUT terraform:         }
2025-07-12 12:59:55.744257 | orchestrator | 12:59:55.744 STDOUT terraform:     }
2025-07-12 12:59:55.744309 | orchestrator | 12:59:55.744 STDOUT terraform:   # openstack_networking_port_v2.node_port_management[5] will be created
2025-07-12 12:59:55.744361 | orchestrator | 12:59:55.744 STDOUT terraform:   + resource "openstack_networking_port_v2" "node_port_management" {
2025-07-12 12:59:55.744403 | orchestrator | 12:59:55.744 STDOUT terraform:       + admin_state_up = (known after apply)
2025-07-12 12:59:55.744445 | orchestrator | 12:59:55.744 STDOUT terraform:       + all_fixed_ips = (known after apply)
2025-07-12 12:59:55.744486 | orchestrator | 12:59:55.744 STDOUT terraform:       + all_security_group_ids = (known after apply)
2025-07-12 12:59:55.744532 | orchestrator | 12:59:55.744 STDOUT terraform:       + all_tags = (known after apply)
2025-07-12 12:59:55.744574 | orchestrator | 12:59:55.744 STDOUT terraform:       + device_id = (known after apply)
2025-07-12 12:59:55.744618 | orchestrator | 12:59:55.744 STDOUT terraform:       + device_owner = (known after apply)
2025-07-12 12:59:55.744679 | orchestrator | 12:59:55.744 STDOUT terraform:       + dns_assignment = (known after apply)
2025-07-12 12:59:55.744727 | orchestrator | 12:59:55.744 STDOUT terraform:       + dns_name = (known after apply)
2025-07-12 12:59:55.744770 | orchestrator | 12:59:55.744 STDOUT terraform:       + id = (known after apply)
2025-07-12 12:59:55.744812 | orchestrator | 12:59:55.744 STDOUT terraform:       + mac_address = (known after apply)
2025-07-12 12:59:55.744854 | orchestrator | 12:59:55.744 STDOUT terraform:       + network_id = (known after apply)
2025-07-12 12:59:55.744895 | orchestrator | 12:59:55.744 STDOUT terraform:       + port_security_enabled = (known after apply)
2025-07-12 12:59:55.744937 | orchestrator | 12:59:55.744 STDOUT terraform:       + qos_policy_id = (known after apply)
2025-07-12 12:59:55.744981 | orchestrator | 12:59:55.744 STDOUT terraform:       + region = (known after apply)
2025-07-12 12:59:55.745022 | orchestrator | 12:59:55.744 STDOUT terraform:       + security_group_ids = (known after apply)
2025-07-12 12:59:55.745066 | orchestrator | 12:59:55.745 STDOUT terraform:       + tenant_id = (known after apply)
2025-07-12 12:59:55.745093 | orchestrator | 12:59:55.745 STDOUT terraform:       + allowed_address_pairs {
2025-07-12 12:59:55.745129 | orchestrator | 12:59:55.745 STDOUT terraform:           + ip_address = "192.168.112.0/20"
2025-07-12 12:59:55.745150 | orchestrator | 12:59:55.745 STDOUT terraform:         }
2025-07-12 12:59:55.745176 | orchestrator | 12:59:55.745 STDOUT terraform:       + allowed_address_pairs {
2025-07-12 12:59:55.745211 | orchestrator | 12:59:55.745 STDOUT terraform:           + ip_address = "192.168.16.254/20"
2025-07-12 12:59:55.745231 | orchestrator | 12:59:55.745 STDOUT terraform:         }
2025-07-12 12:59:55.745257 | orchestrator | 12:59:55.745 STDOUT terraform:       + allowed_address_pairs {
2025-07-12 12:59:55.745291 | orchestrator | 12:59:55.745 STDOUT terraform:           + ip_address = "192.168.16.8/20"
2025-07-12 12:59:55.745311 | orchestrator | 12:59:55.745 STDOUT terraform:         }
2025-07-12 12:59:55.745337 | orchestrator | 12:59:55.745 STDOUT terraform:       + allowed_address_pairs {
2025-07-12 12:59:55.745372 | orchestrator | 12:59:55.745 STDOUT terraform:           + ip_address = "192.168.16.9/20"
2025-07-12 12:59:55.745393 | orchestrator | 12:59:55.745 STDOUT terraform:         }
2025-07-12 12:59:55.745423 | orchestrator | 12:59:55.745 STDOUT terraform:       + binding (known after apply)
2025-07-12 12:59:55.745443 | orchestrator | 12:59:55.745 STDOUT terraform:       + fixed_ip {
2025-07-12 12:59:55.745476 | orchestrator | 12:59:55.745 STDOUT terraform:           + ip_address = "192.168.16.15"
2025-07-12 12:59:55.745512 | orchestrator | 12:59:55.745 STDOUT terraform:           + subnet_id = (known after apply)
2025-07-12 12:59:55.745532 | orchestrator | 12:59:55.745 STDOUT terraform:         }
2025-07-12 12:59:55.745552 | orchestrator | 12:59:55.745 STDOUT terraform:     }
2025-07-12 12:59:55.745606 | orchestrator | 12:59:55.745 STDOUT terraform:   # openstack_networking_router_interface_v2.router_interface will be created
2025-07-12 12:59:55.745671 | orchestrator | 12:59:55.745 STDOUT terraform:   + resource "openstack_networking_router_interface_v2" "router_interface" {
2025-07-12 12:59:55.745699 | orchestrator | 12:59:55.745 STDOUT terraform:       + force_destroy = false
2025-07-12 12:59:55.745735 | orchestrator | 12:59:55.745 STDOUT terraform:       + id = (known after apply)
2025-07-12 12:59:55.745775 | orchestrator | 12:59:55.745 STDOUT terraform:       + port_id = (known after apply)
2025-07-12 12:59:55.745810 | orchestrator | 12:59:55.745 STDOUT terraform:       + region = (known after apply)
2025-07-12 12:59:55.745846 | orchestrator | 12:59:55.745 STDOUT terraform:       + router_id = (known after apply)
2025-07-12 12:59:55.745881 | orchestrator | 12:59:55.745 STDOUT terraform:       + subnet_id = (known after apply)
2025-07-12 12:59:55.745901 | orchestrator | 12:59:55.745 STDOUT terraform:     }
2025-07-12 12:59:55.745943 | orchestrator | 12:59:55.745 STDOUT terraform:   # openstack_networking_router_v2.router will be created
2025-07-12 12:59:55.745985 | orchestrator | 12:59:55.745 STDOUT terraform:   + resource "openstack_networking_router_v2" "router" {
2025-07-12 12:59:55.746050 | orchestrator | 12:59:55.745 STDOUT terraform:       + admin_state_up = (known after apply)
2025-07-12 12:59:55.746095 | orchestrator | 12:59:55.746 STDOUT terraform:       + all_tags = (known after apply)
2025-07-12 12:59:55.746126 | orchestrator | 12:59:55.746 STDOUT terraform:       + availability_zone_hints = [
2025-07-12 12:59:55.746149 | orchestrator | 12:59:55.746 STDOUT terraform:           + "nova",
2025-07-12 12:59:55.746170 | orchestrator | 12:59:55.746 STDOUT terraform:         ]
2025-07-12 12:59:55.746212 | orchestrator | 12:59:55.746 STDOUT terraform:       + distributed = (known after apply)
2025-07-12 12:59:55.746255 | orchestrator | 12:59:55.746 STDOUT terraform:       + enable_snat = (known after apply)
2025-07-12 12:59:55.746313 | orchestrator | 12:59:55.746 STDOUT terraform:       + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
2025-07-12 12:59:55.746360 | orchestrator | 12:59:55.746 STDOUT terraform:       + external_qos_policy_id = (known after apply)
2025-07-12 12:59:55.746404 | orchestrator | 12:59:55.746 STDOUT terraform:       + id = (known after apply)
2025-07-12 12:59:55.746440 | orchestrator | 12:59:55.746 STDOUT terraform:       + name = "testbed"
2025-07-12 12:59:55.746483 | orchestrator | 12:59:55.746 STDOUT terraform:       + region = (known after apply)
2025-07-12 12:59:55.746527 | orchestrator | 12:59:55.746 STDOUT terraform:       + tenant_id = (known after apply)
2025-07-12 12:59:55.746563 | orchestrator | 12:59:55.746 STDOUT terraform:       + external_fixed_ip (known after apply)
2025-07-12 12:59:55.746583 | orchestrator | 12:59:55.746 STDOUT terraform:     }
2025-07-12 12:59:55.746654 | orchestrator | 12:59:55.746 STDOUT terraform:   # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
2025-07-12 12:59:55.746714 | orchestrator | 12:59:55.746 STDOUT terraform:   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
2025-07-12 12:59:55.746746 | orchestrator | 12:59:55.746 STDOUT terraform:       + description = "ssh"
2025-07-12 12:59:55.746782 | orchestrator | 12:59:55.746 STDOUT terraform:       + direction = "ingress"
2025-07-12 12:59:55.746814 | orchestrator | 12:59:55.746 STDOUT terraform:       + ethertype = "IPv4"
2025-07-12 12:59:55.746857 | orchestrator | 12:59:55.746 STDOUT terraform:       + id = (known after apply)
2025-07-12 12:59:55.746887 | orchestrator | 12:59:55.746 STDOUT terraform:       + port_range_max = 22
2025-07-12 12:59:55.746918 | orchestrator | 12:59:55.746 STDOUT terraform:       + port_range_min = 22
2025-07-12 12:59:55.746956 | orchestrator | 12:59:55.746 STDOUT terraform:       + protocol = "tcp"
2025-07-12 12:59:55.746999 | orchestrator | 12:59:55.746 STDOUT terraform:       + region = (known after apply)
2025-07-12 12:59:55.747042 | orchestrator | 12:59:55.747 STDOUT terraform:       + remote_address_group_id = (known after apply)
2025-07-12 12:59:55.747084 | orchestrator | 12:59:55.747 STDOUT terraform:       + remote_group_id = (known after apply)
2025-07-12 12:59:55.747121 | orchestrator | 12:59:55.747 STDOUT terraform:       + remote_ip_prefix = "0.0.0.0/0"
2025-07-12 12:59:55.747164 | orchestrator | 12:59:55.747 STDOUT terraform:       + security_group_id = (known after apply)
2025-07-12 12:59:55.747211 | orchestrator | 12:59:55.747 STDOUT terraform:       + tenant_id = (known after apply)
2025-07-12 12:59:55.747231 | orchestrator | 12:59:55.747 STDOUT terraform:     }
2025-07-12 12:59:55.747291 | orchestrator | 12:59:55.747 STDOUT terraform:   # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
2025-07-12 12:59:55.747350 | orchestrator | 12:59:55.747 STDOUT terraform:   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
2025-07-12 12:59:55.747388 | orchestrator | 12:59:55.747 STDOUT terraform:       + description = "wireguard"
2025-07-12 12:59:55.747423 | orchestrator | 12:59:55.747 STDOUT terraform:       + direction = "ingress"
2025-07-12 12:59:55.747454 | orchestrator | 12:59:55.747 STDOUT terraform:       + ethertype = "IPv4"
2025-07-12 12:59:55.747499 | orchestrator | 12:59:55.747 STDOUT terraform:       + id = (known after apply)
2025-07-12 12:59:55.747530 | orchestrator | 12:59:55.747 STDOUT terraform:       + port_range_max = 51820
2025-07-12 12:59:55.747562 | orchestrator | 12:59:55.747 STDOUT terraform:       + port_range_min = 51820
2025-07-12 12:59:55.747593 | orchestrator | 12:59:55.747 STDOUT terraform:       + protocol = "udp"
2025-07-12 12:59:55.747665 | orchestrator | 12:59:55.747 STDOUT terraform:       + region = (known after apply)
2025-07-12 12:59:55.747723 | orchestrator | 12:59:55.747 STDOUT terraform:       + remote_address_group_id = (known after apply)
2025-07-12 12:59:55.747767 | orchestrator | 12:59:55.747 STDOUT terraform:       + remote_group_id = (known after apply)
2025-07-12 12:59:55.747805 | orchestrator | 12:59:55.747 STDOUT terraform:       + remote_ip_prefix = "0.0.0.0/0"
2025-07-12 12:59:55.747852 | orchestrator | 12:59:55.747 STDOUT terraform:       + security_group_id = (known after apply)
2025-07-12 12:59:55.747896 | orchestrator | 12:59:55.747 STDOUT terraform:       + tenant_id = (known after apply)
2025-07-12 12:59:55.747916 | orchestrator | 12:59:55.747 STDOUT terraform:     }
2025-07-12 12:59:55.747975 | orchestrator | 12:59:55.747 STDOUT terraform:   # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
2025-07-12 12:59:55.748035 | orchestrator | 12:59:55.747 STDOUT terraform:   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
2025-07-12 12:59:55.748071 | orchestrator | 12:59:55.748 STDOUT terraform:       + direction = "ingress"
2025-07-12 12:59:55.748102 | orchestrator | 12:59:55.748 STDOUT terraform:       + ethertype = "IPv4"
2025-07-12 12:59:55.748150 | orchestrator | 12:59:55.748 STDOUT terraform:       + id = (known after apply)
2025-07-12 12:59:55.748181 | orchestrator | 12:59:55.748 STDOUT terraform:       + protocol = "tcp"
2025-07-12 12:59:55.748223 | orchestrator | 12:59:55.748 STDOUT terraform:       + region = (known after apply)
2025-07-12 12:59:55.748266 | orchestrator | 12:59:55.748 STDOUT terraform:       + remote_address_group_id = (known after apply)
2025-07-12 12:59:55.748308 | orchestrator | 12:59:55.748 STDOUT terraform:       + remote_group_id = (known after apply)
2025-07-12 12:59:55.748349 | orchestrator | 12:59:55.748 STDOUT terraform:       + remote_ip_prefix = "192.168.16.0/20"
2025-07-12 12:59:55.748391 | orchestrator | 12:59:55.748 STDOUT terraform:       + security_group_id = (known after apply)
2025-07-12 12:59:55.748434 | orchestrator | 12:59:55.748 STDOUT terraform:       + tenant_id = (known after apply)
2025-07-12 12:59:55.748454 | orchestrator | 12:59:55.748 STDOUT terraform:     }
2025-07-12 12:59:55.748512 | orchestrator | 12:59:55.748 STDOUT terraform:   # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
2025-07-12 12:59:55.748569 | orchestrator | 12:59:55.748 STDOUT terraform:   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
2025-07-12 12:59:55.748604 | orchestrator | 12:59:55.748 STDOUT terraform:       + direction = "ingress"
2025-07-12 12:59:55.748646 | orchestrator | 12:59:55.748 STDOUT terraform:       + ethertype = "IPv4"
2025-07-12 12:59:55.748689 | orchestrator | 12:59:55.748 STDOUT terraform:       + id = (known after apply)
2025-07-12 12:59:55.748722 | orchestrator | 12:59:55.748 STDOUT terraform:       + protocol = "udp"
2025-07-12 12:59:55.748763 | orchestrator | 12:59:55.748 STDOUT terraform:       + region = (known after apply)
2025-07-12 12:59:55.748805 | orchestrator | 12:59:55.748 STDOUT terraform:       + remote_address_group_id = (known after apply)
2025-07-12 12:59:55.748846 | orchestrator | 12:59:55.748 STDOUT terraform:       + remote_group_id = (known after apply)
2025-07-12 12:59:55.748886 | orchestrator | 12:59:55.748 STDOUT terraform:       + remote_ip_prefix = "192.168.16.0/20"
2025-07-12 12:59:55.748927 | orchestrator | 12:59:55.748 STDOUT terraform:       + security_group_id = (known after apply)
2025-07-12 12:59:55.748970 | orchestrator | 12:59:55.748 STDOUT terraform:       + tenant_id = (known after apply)
2025-07-12 12:59:55.748988 | orchestrator | 12:59:55.748 STDOUT terraform:     }
2025-07-12 12:59:55.749851 | orchestrator | 12:59:55.748 STDOUT terraform:   # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
2025-07-12 12:59:55.749935 | orchestrator | 12:59:55.749 STDOUT terraform:   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
2025-07-12 12:59:55.749975 | orchestrator | 12:59:55.749 STDOUT terraform:       + direction = "ingress"
2025-07-12 12:59:55.750007 | orchestrator | 12:59:55.749 STDOUT terraform:       + ethertype = "IPv4"
2025-07-12 12:59:55.750078 | orchestrator | 12:59:55.750 STDOUT terraform:       + id = (known after apply)
2025-07-12 12:59:55.750109 | orchestrator | 12:59:55.750 STDOUT terraform:       + protocol = "icmp"
2025-07-12 12:59:55.750165 | orchestrator | 12:59:55.750 STDOUT terraform:       + region = (known after apply)
2025-07-12 12:59:55.750208 | orchestrator | 12:59:55.750 STDOUT terraform:       + remote_address_group_id = (known after apply)
2025-07-12 12:59:55.750250 | orchestrator | 12:59:55.750 STDOUT terraform:       + remote_group_id = (known after apply)
2025-07-12 12:59:55.750286 | orchestrator | 12:59:55.750 STDOUT terraform:       + remote_ip_prefix = "0.0.0.0/0"
2025-07-12 12:59:55.750328 | orchestrator | 12:59:55.750 STDOUT terraform:       + security_group_id = (known after apply)
2025-07-12 12:59:55.750370 | orchestrator | 12:59:55.750 STDOUT terraform:       + tenant_id = (known after apply)
2025-07-12 12:59:55.750389 | orchestrator | 12:59:55.750 STDOUT terraform:     }
2025-07-12 12:59:55.750450 | orchestrator | 12:59:55.750 STDOUT terraform:   # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
2025-07-12 12:59:55.750505 | orchestrator | 12:59:55.750 STDOUT terraform:   + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2025-07-12 12:59:55.750540 | orchestrator | 12:59:55.750 STDOUT terraform:       + direction = "ingress"
2025-07-12 12:59:55.750570 | orchestrator | 12:59:55.750 STDOUT terraform:       + ethertype = "IPv4"
2025-07-12 12:59:55.750612 | orchestrator | 12:59:55.750 STDOUT terraform:       + id = (known after apply)
2025-07-12 12:59:55.750677 | orchestrator | 12:59:55.750 STDOUT terraform:       + protocol = "tcp"
2025-07-12 12:59:55.750722 | orchestrator | 12:59:55.750 STDOUT terraform:       + region = (known after apply)
2025-07-12 12:59:55.750763 | orchestrator | 12:59:55.750 STDOUT terraform:       + remote_address_group_id = (known after apply)
2025-07-12 12:59:55.750806 | orchestrator | 12:59:55.750 STDOUT terraform:       + remote_group_id = (known after apply)
2025-07-12 12:59:55.750841 | orchestrator | 12:59:55.750 STDOUT terraform:       + remote_ip_prefix = "0.0.0.0/0"
2025-07-12 12:59:55.750884 | orchestrator | 12:59:55.750 STDOUT terraform:       + security_group_id = (known after apply)
2025-07-12 12:59:55.750926 | orchestrator | 12:59:55.750 STDOUT terraform:       + tenant_id = (known after apply)
2025-07-12 12:59:55.750945 | orchestrator | 12:59:55.750 STDOUT terraform:     }
2025-07-12 12:59:55.751000 | orchestrator | 12:59:55.750 STDOUT terraform:   # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2025-07-12 12:59:55.751055 | orchestrator | 12:59:55.751 STDOUT terraform:   + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2025-07-12 12:59:55.751089 | orchestrator | 12:59:55.751 STDOUT terraform:       + direction = "ingress"
2025-07-12 12:59:55.751120 | orchestrator | 12:59:55.751 STDOUT terraform:       + ethertype = "IPv4"
2025-07-12 12:59:55.751163 | orchestrator | 12:59:55.751 STDOUT terraform:       + id = (known after apply)
2025-07-12 12:59:55.751194 | orchestrator | 12:59:55.751 STDOUT terraform:       + protocol = "udp"
2025-07-12 12:59:55.751237 | orchestrator | 12:59:55.751 STDOUT terraform:       + region = (known after apply)
2025-07-12 12:59:55.751277 | orchestrator | 12:59:55.751 STDOUT terraform:       + remote_address_group_id = (known after apply)
2025-07-12 12:59:55.751320 | orchestrator | 12:59:55.751 STDOUT terraform:       + remote_group_id = (known after apply)
2025-07-12 12:59:55.751361 | orchestrator | 12:59:55.751 STDOUT terraform:       + remote_ip_prefix = "0.0.0.0/0"
2025-07-12 12:59:55.751401 | orchestrator | 12:59:55.751 STDOUT terraform:       + security_group_id = (known after apply)
2025-07-12 12:59:55.751443 | orchestrator | 12:59:55.751 STDOUT terraform:       + tenant_id = (known after apply)
2025-07-12 12:59:55.751462 | orchestrator | 12:59:55.751 STDOUT terraform:     }
2025-07-12 12:59:55.751518 | orchestrator | 12:59:55.751 STDOUT terraform:   # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2025-07-12 12:59:55.751573 | orchestrator | 12:59:55.751 STDOUT terraform:   + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2025-07-12 12:59:55.751608 | orchestrator | 12:59:55.751 STDOUT terraform:       + direction = "ingress"
2025-07-12 12:59:55.751655 | orchestrator | 12:59:55.751 STDOUT terraform:       + ethertype = "IPv4"
2025-07-12 12:59:55.751698 | orchestrator | 12:59:55.751 STDOUT terraform:       + id = (known after apply)
2025-07-12 12:59:55.751731 | orchestrator | 12:59:55.751 STDOUT terraform:       + protocol = "icmp"
2025-07-12 12:59:55.751773 | orchestrator | 12:59:55.751 STDOUT terraform:       + region = (known after apply)
2025-07-12 12:59:55.751814 | orchestrator | 12:59:55.751 STDOUT terraform:       + remote_address_group_id = (known after apply)
2025-07-12 12:59:55.751855 | orchestrator | 12:59:55.751 STDOUT terraform:       + remote_group_id = (known after apply)
2025-07-12 12:59:55.751891 | orchestrator | 12:59:55.751 STDOUT terraform:       + remote_ip_prefix = "0.0.0.0/0"
2025-07-12 12:59:55.751932 | orchestrator | 12:59:55.751 STDOUT terraform:       + security_group_id = (known after apply)
2025-07-12 12:59:55.751973 | orchestrator | 12:59:55.751 STDOUT terraform:       + tenant_id = (known after apply)
2025-07-12 12:59:55.751992 | orchestrator | 12:59:55.751 STDOUT terraform:     }
2025-07-12 12:59:55.752048 | orchestrator | 12:59:55.752 STDOUT terraform:   # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2025-07-12 12:59:55.752103 | orchestrator | 12:59:55.752 STDOUT terraform:   + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2025-07-12 12:59:55.752134 | orchestrator | 12:59:55.752 STDOUT terraform:       + description = "vrrp"
2025-07-12 12:59:55.752169 | orchestrator | 12:59:55.752 STDOUT terraform:       + direction = "ingress"
2025-07-12 12:59:55.752200 | orchestrator | 12:59:55.752 STDOUT terraform:       + ethertype = "IPv4"
2025-07-12 12:59:55.752264 | orchestrator | 12:59:55.752 STDOUT terraform:       + id = (known after apply)
2025-07-12 12:59:55.752296 | orchestrator | 12:59:55.752 STDOUT terraform:       + protocol = "112"
2025-07-12 12:59:55.752340 | orchestrator | 12:59:55.752 STDOUT terraform:       + region = (known after apply)
2025-07-12 12:59:55.752380 | orchestrator | 12:59:55.752 STDOUT terraform:       + remote_address_group_id = (known after apply)
2025-07-12 12:59:55.752422 | orchestrator | 12:59:55.752 STDOUT terraform:       + remote_group_id = (known after apply)
2025-07-12 12:59:55.752458 | orchestrator | 12:59:55.752 STDOUT terraform:       + remote_ip_prefix = "0.0.0.0/0"
2025-07-12 12:59:55.752503 | orchestrator | 12:59:55.752 STDOUT terraform:       + security_group_id = (known after apply)
2025-07-12 12:59:55.752545 | orchestrator | 12:59:55.752 STDOUT terraform:       + tenant_id = (known after apply)
2025-07-12 12:59:55.752565 | orchestrator | 12:59:55.752 STDOUT terraform:     }
2025-07-12 12:59:55.752618 | orchestrator | 12:59:55.752 STDOUT terraform:   # openstack_networking_secgroup_v2.security_group_management will be created
2025-07-12 12:59:55.752684 | orchestrator | 12:59:55.752 STDOUT terraform:   + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-07-12 12:59:55.752719 | orchestrator | 12:59:55.752 STDOUT terraform:       + all_tags = (known after apply)
2025-07-12 12:59:55.752758 | orchestrator | 12:59:55.752 STDOUT terraform:       + description = "management security group"
2025-07-12 12:59:55.752792 | orchestrator | 12:59:55.752 STDOUT terraform:       + id = (known after apply)
2025-07-12 12:59:55.752826 | orchestrator | 12:59:55.752 STDOUT terraform:       + name = "testbed-management"
2025-07-12 12:59:55.752859 | orchestrator | 12:59:55.752 STDOUT terraform:       + region = (known after apply)
2025-07-12 12:59:55.752893 | orchestrator | 12:59:55.752 STDOUT terraform:       + stateful = (known after apply)
2025-07-12 12:59:55.752926 | orchestrator | 12:59:55.752 STDOUT terraform:       + tenant_id = (known after apply)
2025-07-12 12:59:55.752945 | orchestrator | 12:59:55.752 STDOUT terraform:     }
2025-07-12 12:59:55.753001 | orchestrator | 12:59:55.752 STDOUT terraform:   # openstack_networking_secgroup_v2.security_group_node will be created
2025-07-12 12:59:55.753053 | orchestrator | 12:59:55.753 STDOUT terraform:   + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-07-12 12:59:55.753088 | orchestrator | 12:59:55.753 STDOUT terraform:       + all_tags = (known after apply)
2025-07-12 12:59:55.753123 | orchestrator | 12:59:55.753 STDOUT terraform:       + description = "node security group"
2025-07-12 12:59:55.753157 | orchestrator | 12:59:55.753 STDOUT terraform:       + id = (known after apply)
2025-07-12 12:59:55.753187 | orchestrator | 12:59:55.753 STDOUT terraform:       + name = "testbed-node"
2025-07-12 12:59:55.753221 | orchestrator | 12:59:55.753 STDOUT terraform:       + region = (known after apply)
2025-07-12 12:59:55.753254 | orchestrator | 12:59:55.753 STDOUT terraform:       + stateful = (known after apply)
2025-07-12 12:59:55.753288 | orchestrator | 12:59:55.753 STDOUT terraform:       + tenant_id = (known
after apply) 2025-07-12 12:59:55.753307 | orchestrator | 12:59:55.753 STDOUT terraform:  } 2025-07-12 12:59:55.753357 | orchestrator | 12:59:55.753 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-07-12 12:59:55.753407 | orchestrator | 12:59:55.753 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-07-12 12:59:55.753442 | orchestrator | 12:59:55.753 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 12:59:55.753478 | orchestrator | 12:59:55.753 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-07-12 12:59:55.753505 | orchestrator | 12:59:55.753 STDOUT terraform:  + dns_nameservers = [ 2025-07-12 12:59:55.753528 | orchestrator | 12:59:55.753 STDOUT terraform:  + "8.8.8.8", 2025-07-12 12:59:55.753550 | orchestrator | 12:59:55.753 STDOUT terraform:  + "9.9.9.9", 2025-07-12 12:59:55.753575 | orchestrator | 12:59:55.753 STDOUT terraform:  ] 2025-07-12 12:59:55.753601 | orchestrator | 12:59:55.753 STDOUT terraform:  + enable_dhcp = true 2025-07-12 12:59:55.753649 | orchestrator | 12:59:55.753 STDOUT terraform:  + gateway_ip = (known after apply) 2025-07-12 12:59:55.753687 | orchestrator | 12:59:55.753 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:55.753713 | orchestrator | 12:59:55.753 STDOUT terraform:  + ip_version = 4 2025-07-12 12:59:55.753749 | orchestrator | 12:59:55.753 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-07-12 12:59:55.753785 | orchestrator | 12:59:55.753 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-07-12 12:59:55.753884 | orchestrator | 12:59:55.753 STDOUT terraform:  + name = "subnet-testbed-management" 2025-07-12 12:59:55.753925 | orchestrator | 12:59:55.753 STDOUT terraform:  + network_id = (known after apply) 2025-07-12 12:59:55.753952 | orchestrator | 12:59:55.753 STDOUT terraform:  + no_gateway = false 2025-07-12 12:59:55.753989 | orchestrator | 12:59:55.753 STDOUT terraform:  + region = (known after 
apply) 2025-07-12 12:59:55.754040 | orchestrator | 12:59:55.753 STDOUT terraform:  + service_types = (known after apply) 2025-07-12 12:59:55.754084 | orchestrator | 12:59:55.754 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 12:59:55.754110 | orchestrator | 12:59:55.754 STDOUT terraform:  + allocation_pool { 2025-07-12 12:59:55.754142 | orchestrator | 12:59:55.754 STDOUT terraform:  + end = "192.168.31.250" 2025-07-12 12:59:55.754172 | orchestrator | 12:59:55.754 STDOUT terraform:  + start = "192.168.31.200" 2025-07-12 12:59:55.754192 | orchestrator | 12:59:55.754 STDOUT terraform:  } 2025-07-12 12:59:55.754212 | orchestrator | 12:59:55.754 STDOUT terraform:  } 2025-07-12 12:59:55.754243 | orchestrator | 12:59:55.754 STDOUT terraform:  # terraform_data.image will be created 2025-07-12 12:59:55.754274 | orchestrator | 12:59:55.754 STDOUT terraform:  + resource "terraform_data" "image" { 2025-07-12 12:59:55.754305 | orchestrator | 12:59:55.754 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:55.754331 | orchestrator | 12:59:55.754 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-07-12 12:59:55.754362 | orchestrator | 12:59:55.754 STDOUT terraform:  + output = (known after apply) 2025-07-12 12:59:55.754382 | orchestrator | 12:59:55.754 STDOUT terraform:  } 2025-07-12 12:59:55.754416 | orchestrator | 12:59:55.754 STDOUT terraform:  # terraform_data.image_node will be created 2025-07-12 12:59:55.754454 | orchestrator | 12:59:55.754 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-07-12 12:59:55.754484 | orchestrator | 12:59:55.754 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:55.754512 | orchestrator | 12:59:55.754 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-07-12 12:59:55.754541 | orchestrator | 12:59:55.754 STDOUT terraform:  + output = (known after apply) 2025-07-12 12:59:55.754561 | orchestrator | 12:59:55.754 STDOUT terraform:  } 2025-07-12 12:59:55.754598 | orchestrator | 12:59:55.754 STDOUT 
terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-07-12 12:59:55.754620 | orchestrator | 12:59:55.754 STDOUT terraform: Changes to Outputs: 2025-07-12 12:59:55.754690 | orchestrator | 12:59:55.754 STDOUT terraform:  + manager_address = (sensitive value) 2025-07-12 12:59:55.754723 | orchestrator | 12:59:55.754 STDOUT terraform:  + private_key = (sensitive value) 2025-07-12 12:59:55.883574 | orchestrator | 12:59:55.883 STDOUT terraform: terraform_data.image: Creating... 2025-07-12 12:59:55.883678 | orchestrator | 12:59:55.883 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=01aa258c-fa91-7d25-2975-02844373c19a] 2025-07-12 12:59:55.958893 | orchestrator | 12:59:55.958 STDOUT terraform: terraform_data.image_node: Creating... 2025-07-12 12:59:55.958990 | orchestrator | 12:59:55.958 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=a6f6a13c-2d6f-7c5b-443e-4c948698135d] 2025-07-12 12:59:55.977302 | orchestrator | 12:59:55.977 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-07-12 12:59:55.994721 | orchestrator | 12:59:55.994 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-07-12 12:59:55.995041 | orchestrator | 12:59:55.994 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-07-12 12:59:55.996534 | orchestrator | 12:59:55.996 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-07-12 12:59:55.997224 | orchestrator | 12:59:55.997 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-07-12 12:59:55.997918 | orchestrator | 12:59:55.997 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-07-12 12:59:56.002149 | orchestrator | 12:59:56.002 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-07-12 12:59:56.002626 | orchestrator | 12:59:56.002 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 
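The security-group rules listed in the plan above can be reconstructed as a minimal HCL sketch. The resource names and attribute values are taken from the plan output; the `security_group_id` reference is an assumption, since the plan only shows it as "(known after apply)":

```hcl
# Sketch reconstructed from the plan output; wiring to security_group_node
# is an assumption (the actual testbed Terraform may differ).
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112" # IP protocol number 112 = VRRP
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}
```

Rules 1-3 differ only in `protocol` ("tcp", "udp", "icmp"), so in practice they could also be generated with a `for_each` over those three values.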
2025-07-12 12:59:56.004746 | orchestrator | 12:59:56.004 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-07-12 12:59:56.005102 | orchestrator | 12:59:56.005 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-07-12 12:59:56.441307 | orchestrator | 12:59:56.440 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-07-12 12:59:56.447654 | orchestrator | 12:59:56.447 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-07-12 12:59:56.589129 | orchestrator | 12:59:56.588 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2025-07-12 12:59:56.598591 | orchestrator | 12:59:56.598 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-07-12 12:59:57.052098 | orchestrator | 12:59:57.051 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=70d4b8d7-2deb-4033-84d0-aa90144931fc]
2025-07-12 12:59:57.059013 | orchestrator | 12:59:57.058 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-07-12 12:59:57.122127 | orchestrator | 12:59:57.121 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-07-12 12:59:57.134408 | orchestrator | 12:59:57.134 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-07-12 12:59:59.600718 | orchestrator | 12:59:59.600 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=b303d5ed-b20f-4882-90f3-23adead236a1]
2025-07-12 12:59:59.615520 | orchestrator | 12:59:59.615 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=80678cc2-85df-4096-9cf9-3a4ced065123]
2025-07-12 12:59:59.619442 | orchestrator | 12:59:59.619 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-07-12 12:59:59.628067 | orchestrator | 12:59:59.627 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-07-12 12:59:59.628825 | orchestrator | 12:59:59.628 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=8077ac2286ccc27d04935dd3f0dcc10f64f50f83]
2025-07-12 12:59:59.635919 | orchestrator | 12:59:59.635 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=fc9809d7a4ce64206700dda1a0ffb983e549a486]
2025-07-12 12:59:59.636602 | orchestrator | 12:59:59.636 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-07-12 12:59:59.639491 | orchestrator | 12:59:59.639 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=1e2296ca-3498-48cf-a25a-293306b54174]
2025-07-12 12:59:59.645145 | orchestrator | 12:59:59.644 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=11616408-d26b-4882-b347-f5b812b9aa41]
2025-07-12 12:59:59.647732 | orchestrator | 12:59:59.647 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-07-12 12:59:59.647777 | orchestrator | 12:59:59.647 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-07-12 12:59:59.653535 | orchestrator | 12:59:59.653 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-07-12 12:59:59.661702 | orchestrator | 12:59:59.661 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=cf6824d0-2336-4864-a32f-bffef7606523]
2025-07-12 12:59:59.667552 | orchestrator | 12:59:59.667 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-07-12 12:59:59.685528 | orchestrator | 12:59:59.685 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=751344b4-b2fa-492b-b080-e9e5b4c67369]
2025-07-12 12:59:59.691718 | orchestrator | 12:59:59.691 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-07-12 12:59:59.739031 | orchestrator | 12:59:59.738 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=bad1a367-9870-4c1b-af18-4999b26662c8]
2025-07-12 12:59:59.744567 | orchestrator | 12:59:59.744 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-07-12 12:59:59.789921 | orchestrator | 12:59:59.789 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=375d9971-3091-4ee7-ad22-0f2ee4316c51]
2025-07-12 12:59:59.848225 | orchestrator | 12:59:59.847 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=ec46bf14-c827-46d0-9a8c-19525aeacad6]
2025-07-12 13:00:00.467096 | orchestrator | 13:00:00.466 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=7e9d1910-06c8-440d-90a7-8db2d3bfab88]
2025-07-12 13:00:00.585462 | orchestrator | 13:00:00.585 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=433e8f9c-a8e1-4b68-b2b7-293c962bb1f4]
2025-07-12 13:00:00.595353 | orchestrator | 13:00:00.595 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-07-12 13:00:03.051665 | orchestrator | 13:00:03.051 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=5d5efef6-d196-4496-8e4e-101ce21afc70]
2025-07-12 13:00:03.064725 | orchestrator | 13:00:03.064 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=aadb0376-3a3f-4e17-8a7c-6b5eb89f12e4]
2025-07-12 13:00:03.101017 | orchestrator | 13:00:03.100 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=7818c38c-8f07-44e8-a255-faa9c2adb8b1]
2025-07-12 13:00:03.131452 | orchestrator | 13:00:03.131 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=e7fde3ea-5d9a-4384-9c3f-e31c3a5c0c1c]
2025-07-12 13:00:03.133383 | orchestrator | 13:00:03.133 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=2dbda7d9-a979-4bbd-9db7-ef0f0263b434]
2025-07-12 13:00:03.146606 | orchestrator | 13:00:03.146 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=99f613dd-b7e8-44ac-806c-7adcee7e2968]
2025-07-12 13:00:04.086571 | orchestrator | 13:00:04.086 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 3s [id=eea522ad-466a-440e-af98-d86e57035427]
2025-07-12 13:00:04.556215 | orchestrator | 13:00:04.099 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-07-12 13:00:04.556268 | orchestrator | 13:00:04.099 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-07-12 13:00:04.556277 | orchestrator | 13:00:04.102 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
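The `subnet_management` resource whose creation completes above corresponds to the plan entry earlier in the log and can be sketched as HCL. All values come from the plan output; the `network_id` wiring is an assumption, since the plan shows it only as "(known after apply)":

```hcl
# Sketch of the management subnet as shown in the plan; network_id wiring
# is an assumption.
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```

Note that the DHCP allocation pool (192.168.31.200-250) is deliberately carved out of the top of the /20, leaving the rest of the range free for statically assigned ports.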
2025-07-12 13:00:04.556284 | orchestrator | 13:00:04.319 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=437966c0-55f7-4ee5-a0f5-38e744ea7920]
2025-07-12 13:00:04.556293 | orchestrator | 13:00:04.334 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-07-12 13:00:04.556315 | orchestrator | 13:00:04.335 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-07-12 13:00:04.556323 | orchestrator | 13:00:04.336 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-07-12 13:00:04.556330 | orchestrator | 13:00:04.341 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-07-12 13:00:04.556336 | orchestrator | 13:00:04.347 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-07-12 13:00:04.556343 | orchestrator | 13:00:04.348 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-07-12 13:00:04.556350 | orchestrator | 13:00:04.350 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=3f022e11-ac7a-4c09-95cd-3e759f096ce9]
2025-07-12 13:00:04.556357 | orchestrator | 13:00:04.353 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-07-12 13:00:04.556364 | orchestrator | 13:00:04.358 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-07-12 13:00:04.556371 | orchestrator | 13:00:04.370 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-07-12 13:00:04.573443 | orchestrator | 13:00:04.573 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=6e5d5d98-6896-4a29-aa11-97dffb7727cc]
2025-07-12 13:00:04.583285 | orchestrator | 13:00:04.583 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-07-12 13:00:05.052661 | orchestrator | 13:00:05.052 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=29daab07-71bf-40ad-8aef-1ed1e51fbeaf]
2025-07-12 13:00:05.060021 | orchestrator | 13:00:05.059 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-07-12 13:00:05.302309 | orchestrator | 13:00:05.301 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=097f793a-c878-4612-b991-edd1ac9e4585]
2025-07-12 13:00:05.311293 | orchestrator | 13:00:05.310 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-07-12 13:00:05.502477 | orchestrator | 13:00:05.502 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 2s [id=66148d11-4146-4b80-b823-a8469406a393]
2025-07-12 13:00:05.508585 | orchestrator | 13:00:05.508 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-07-12 13:00:05.556293 | orchestrator | 13:00:05.555 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 2s [id=75b40426-0ad1-45bf-a81b-2ca7c5e67550]
2025-07-12 13:00:05.557035 | orchestrator | 13:00:05.556 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=e50b00d4-dacb-4471-8452-153d75e9c2cc]
2025-07-12 13:00:05.565339 | orchestrator | 13:00:05.565 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-07-12 13:00:05.565397 | orchestrator | 13:00:05.565 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-07-12 13:00:05.588658 | orchestrator | 13:00:05.588 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 2s [id=ecc06b27-4f0d-45ff-beae-3999d200179f]
2025-07-12 13:00:05.600038 | orchestrator | 13:00:05.599 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-07-12 13:00:05.643118 | orchestrator | 13:00:05.642 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 2s [id=38e2dca7-4040-42a2-98f0-610cc4b0a2f0]
2025-07-12 13:00:05.677427 | orchestrator | 13:00:05.677 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 2s [id=d31e83fe-45ed-4009-9faa-c669bf82ffd4]
2025-07-12 13:00:05.867851 | orchestrator | 13:00:05.867 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=039cdd73-1cec-4443-8b58-e83baae916bf]
2025-07-12 13:00:05.902366 | orchestrator | 13:00:05.901 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 2s [id=ec920d31-8206-4107-bb7a-137404b1ccc3]
2025-07-12 13:00:05.902957 | orchestrator | 13:00:05.902 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=2ad148c4-9858-426a-9678-1641ef74d85a]
2025-07-12 13:00:06.061450 | orchestrator | 13:00:06.061 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=50ef6906-9471-45e8-8695-e372eeeba3d0]
2025-07-12 13:00:06.068988 | orchestrator | 13:00:06.068 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=690e79a9-d0fd-4379-a9fb-2c06dac7db09]
2025-07-12 13:00:06.221721 | orchestrator | 13:00:06.221 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=e295ed0d-95c7-4501-8a85-b49182e96e10]
2025-07-12 13:00:06.508919 | orchestrator | 13:00:06.508 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=0ab14a3f-2edb-4a36-8513-6835fe4dbf8c]
2025-07-12 13:00:07.047149 | orchestrator | 13:00:07.046 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=38e57bf7-668b-4281-8e24-c1e4db10dd1b]
2025-07-12 13:00:07.063154 | orchestrator | 13:00:07.062 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-07-12 13:00:07.079399 | orchestrator | 13:00:07.079 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-07-12 13:00:07.080241 | orchestrator | 13:00:07.080 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-07-12 13:00:07.081052 | orchestrator | 13:00:07.080 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-07-12 13:00:07.083769 | orchestrator | 13:00:07.083 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-07-12 13:00:07.085852 | orchestrator | 13:00:07.085 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-07-12 13:00:07.102555 | orchestrator | 13:00:07.102 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-07-12 13:00:08.555235 | orchestrator | 13:00:08.553 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=3d1e8d48-7c1f-4911-bce1-afa39b954d4d]
2025-07-12 13:00:08.567376 | orchestrator | 13:00:08.566 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-07-12 13:00:08.569003 | orchestrator | 13:00:08.568 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-07-12 13:00:08.570721 | orchestrator | 13:00:08.570 STDOUT terraform: local_file.inventory: Creating...
2025-07-12 13:00:08.572099 | orchestrator | 13:00:08.571 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=8ddf6e54198db723747b7d014d07be84a0de935e]
2025-07-12 13:00:08.574393 | orchestrator | 13:00:08.574 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=7fe1032e4af1faa47e878ef64632d2a3308a0806]
2025-07-12 13:00:09.291450 | orchestrator | 13:00:09.291 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=3d1e8d48-7c1f-4911-bce1-afa39b954d4d]
2025-07-12 13:00:17.080289 | orchestrator | 13:00:17.079 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-07-12 13:00:17.085781 | orchestrator | 13:00:17.085 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-07-12 13:00:17.085883 | orchestrator | 13:00:17.085 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-07-12 13:00:17.085900 | orchestrator | 13:00:17.085 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-07-12 13:00:17.092834 | orchestrator | 13:00:17.092 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-07-12 13:00:17.103090 | orchestrator | 13:00:17.102 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-07-12 13:00:27.080746 | orchestrator | 13:00:27.080 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-07-12 13:00:27.086106 | orchestrator | 13:00:27.085 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-07-12 13:00:27.086203 | orchestrator | 13:00:27.085 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-07-12 13:00:27.086290 | orchestrator | 13:00:27.086 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-07-12 13:00:27.093240 | orchestrator | 13:00:27.093 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-07-12 13:00:27.104181 | orchestrator | 13:00:27.103 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-07-12 13:00:37.082798 | orchestrator | 13:00:37.082 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-07-12 13:00:37.086982 | orchestrator | 13:00:37.086 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2025-07-12 13:00:37.087532 | orchestrator | 13:00:37.087 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2025-07-12 13:00:37.087563 | orchestrator | 13:00:37.087 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2025-07-12 13:00:37.093448 | orchestrator | 13:00:37.093 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2025-07-12 13:00:37.105017 | orchestrator | 13:00:37.104 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2025-07-12 13:00:37.646782 | orchestrator | 13:00:37.646 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=f8f6fbfb-9b6e-4a86-bab5-2a1a508edcb3]
2025-07-12 13:00:37.655290 | orchestrator | 13:00:37.655 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=fd05fd5e-b0fb-44e5-80cf-03b51ae526ce]
2025-07-12 13:00:37.775437 | orchestrator | 13:00:37.775 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=da27b81b-659c-477a-9815-d0810aa83c6e]
2025-07-12 13:00:37.807532 | orchestrator | 13:00:37.807 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=92403ba6-c9b7-44d6-b08a-1041f1a85394]
2025-07-12 13:00:47.087474 | orchestrator | 13:00:47.087 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2025-07-12 13:00:47.094666 | orchestrator | 13:00:47.094 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2025-07-12 13:00:47.849788 | orchestrator | 13:00:47.849 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 41s [id=39397cd8-e627-4c5f-9ef5-14566544df6f]
2025-07-12 13:00:48.177531 | orchestrator | 13:00:48.177 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 41s [id=cac78405-44ff-493f-a2d3-694f85af35c1]
2025-07-12 13:00:48.206242 | orchestrator | 13:00:48.206 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-07-12 13:00:48.212246 | orchestrator | 13:00:48.212 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-07-12 13:00:48.213193 | orchestrator | 13:00:48.213 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-07-12 13:00:48.217233 | orchestrator | 13:00:48.217 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=2685028379308719973]
2025-07-12 13:00:48.218696 | orchestrator | 13:00:48.218 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-07-12 13:00:48.218726 | orchestrator | 13:00:48.218 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-07-12 13:00:48.220065 | orchestrator | 13:00:48.219 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-07-12 13:00:48.221402 | orchestrator | 13:00:48.221 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-07-12 13:00:48.221445 | orchestrator | 13:00:48.221 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-07-12 13:00:48.222088 | orchestrator | 13:00:48.222 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-07-12 13:00:48.227739 | orchestrator | 13:00:48.227 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-07-12 13:00:48.240626 | orchestrator | 13:00:48.240 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
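The attachment ids later in the log (`<server-id>/<volume-id>`) show nine volumes spread over the three compute nodes (node_server[3..5]), three per node. A sketch of that attach step follows; the index arithmetic is an inference from the attachment ids in this log, not confirmed by the testbed sources:

```hcl
# Sketch inferred from the attachment ids above: attachment[i] pairs
# node_volume[i] with node_server[3 + i % 3]. The count and the index
# mapping are assumptions reconstructed from the log.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[3 + count.index % 3].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```

The `null_resource.node_semaphore` that completes instantly above is presumably just a dependency anchor for ordering later steps behind node creation; nothing in the log shows its provisioners.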
2025-07-12 13:00:51.611594 | orchestrator | 13:00:51.610 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=cac78405-44ff-493f-a2d3-694f85af35c1/ec46bf14-c827-46d0-9a8c-19525aeacad6]
2025-07-12 13:00:51.629722 | orchestrator | 13:00:51.629 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=39397cd8-e627-4c5f-9ef5-14566544df6f/11616408-d26b-4882-b347-f5b812b9aa41]
2025-07-12 13:00:51.646089 | orchestrator | 13:00:51.645 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=da27b81b-659c-477a-9815-d0810aa83c6e/1e2296ca-3498-48cf-a25a-293306b54174]
2025-07-12 13:00:51.654267 | orchestrator | 13:00:51.653 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=39397cd8-e627-4c5f-9ef5-14566544df6f/751344b4-b2fa-492b-b080-e9e5b4c67369]
2025-07-12 13:00:51.667789 | orchestrator | 13:00:51.667 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=cac78405-44ff-493f-a2d3-694f85af35c1/bad1a367-9870-4c1b-af18-4999b26662c8]
2025-07-12 13:00:51.669709 | orchestrator | 13:00:51.669 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=da27b81b-659c-477a-9815-d0810aa83c6e/80678cc2-85df-4096-9cf9-3a4ced065123]
2025-07-12 13:00:57.751990 | orchestrator | 13:00:57.751 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=cac78405-44ff-493f-a2d3-694f85af35c1/cf6824d0-2336-4864-a32f-bffef7606523]
2025-07-12 13:00:57.769612 | orchestrator | 13:00:57.769 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=39397cd8-e627-4c5f-9ef5-14566544df6f/b303d5ed-b20f-4882-90f3-23adead236a1]
2025-07-12 13:00:57.776962 | orchestrator | 13:00:57.776 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=da27b81b-659c-477a-9815-d0810aa83c6e/375d9971-3091-4ee7-ad22-0f2ee4316c51]
2025-07-12 13:00:58.249524 | orchestrator | 13:00:58.249 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-07-12 13:01:08.253477 | orchestrator | 13:01:08.253 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-07-12 13:01:09.011100 | orchestrator | 13:01:09.010 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=8d08b6a6-fda2-41a9-b18b-16cecb9d5eef]
2025-07-12 13:01:09.028810 | orchestrator | 13:01:09.028 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2025-07-12 13:01:09.028969 | orchestrator | 13:01:09.028 STDOUT terraform: Outputs:
2025-07-12 13:01:09.029080 | orchestrator | 13:01:09.029 STDOUT terraform: manager_address =
2025-07-12 13:01:09.029092 | orchestrator | 13:01:09.029 STDOUT terraform: private_key =
2025-07-12 13:01:09.115620 | orchestrator | ok: Runtime: 0:01:21.497237
2025-07-12 13:01:09.143584 |
2025-07-12 13:01:09.143723 | TASK [Fetch manager address]
2025-07-12 13:01:09.577218 | orchestrator | ok
2025-07-12 13:01:09.587003 |
2025-07-12 13:01:09.587141 | TASK [Set manager_host address]
2025-07-12 13:01:09.671232 | orchestrator | ok
2025-07-12 13:01:09.679618 |
2025-07-12 13:01:09.679754 | LOOP [Update ansible collections]
2025-07-12 13:01:13.007874 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-07-12 13:01:13.008133 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-07-12 13:01:13.008172 | orchestrator | Starting galaxy collection install process
2025-07-12 13:01:13.008197 | orchestrator | Process install dependency map
2025-07-12 13:01:13.008219 | orchestrator | Starting collection install process
2025-07-12 13:01:13.008240 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons'
2025-07-12 13:01:13.008264 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons
2025-07-12 13:01:13.008289 | orchestrator | osism.commons:999.0.0 was installed successfully
2025-07-12 13:01:13.008331 | orchestrator | ok: Item: commons Runtime: 0:00:03.021923
2025-07-12 13:01:14.268273 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-07-12 13:01:14.268419 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-07-12 13:01:14.268493 | orchestrator | Starting galaxy collection install process
2025-07-12 13:01:14.268556 | orchestrator | Process install dependency map
2025-07-12 13:01:14.268616 | orchestrator | Starting collection install process
2025-07-12 13:01:14.268659 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services'
2025-07-12 13:01:14.268693 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services
2025-07-12 13:01:14.268738 | orchestrator | osism.services:999.0.0 was installed successfully
2025-07-12 13:01:14.268794 | orchestrator | ok: Item: services Runtime: 0:00:01.003136
2025-07-12 13:01:14.289571 |
2025-07-12 13:01:14.289704 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-07-12 13:01:24.737954 | orchestrator | ok
2025-07-12 13:01:24.750020 |
2025-07-12 13:01:24.750166 | TASK [Wait a little longer for the manager so that everything is ready]
2025-07-12 13:02:24.799407 | orchestrator | ok
2025-07-12 13:02:24.809280 |
2025-07-12 13:02:24.809401 | TASK [Fetch manager ssh hostkey]
2025-07-12 13:02:26.379661 | orchestrator | Output suppressed because no_log was given
2025-07-12 13:02:26.386907 |
2025-07-12 13:02:26.387079 | TASK [Get ssh keypair from terraform environment]
2025-07-12 13:02:26.919367 | orchestrator | ok: Runtime: 0:00:00.009912
2025-07-12 13:02:26.937453 |
2025-07-12 13:02:26.937845 | TASK [Point out that the following task takes some time and does not give any output]
2025-07-12 13:02:26.977093 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2025-07-12 13:02:26.987624 |
2025-07-12 13:02:26.987775 | TASK [Run manager part 0]
2025-07-12 13:02:28.345326 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-07-12 13:02:28.507932 | orchestrator |
2025-07-12 13:02:28.508004 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2025-07-12 13:02:28.508014 | orchestrator |
2025-07-12 13:02:28.508034 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2025-07-12 13:02:30.325461 | orchestrator | ok: [testbed-manager]
2025-07-12 13:02:30.325570 | orchestrator |
2025-07-12 13:02:30.325613 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-07-12 13:02:30.325633 | orchestrator |
2025-07-12 13:02:30.325678 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-12 13:02:32.198078 | orchestrator | ok: [testbed-manager]
2025-07-12 13:02:32.198294 | orchestrator |
2025-07-12 13:02:32.198315 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-07-12 13:02:32.897257 | orchestrator | ok: [testbed-manager]
2025-07-12 13:02:32.897362 | orchestrator |
2025-07-12 13:02:32.897378 | orchestrator | TASK [Set repo_path fact] ******************************************************
2025-07-12 13:02:32.957354 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:02:32.957419 | orchestrator |
2025-07-12 13:02:32.957431 | orchestrator | TASK [Update package cache] ****************************************************
2025-07-12 13:02:32.989964 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:02:32.990046 | orchestrator |
2025-07-12 13:02:32.990057 | orchestrator | TASK [Install required packages] ***********************************************
2025-07-12 13:02:33.019713 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:02:33.019764 | orchestrator |
2025-07-12 13:02:33.019771 | orchestrator | TASK [Remove some python packages] *********************************************
2025-07-12 13:02:33.047879 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:02:33.047926 | orchestrator |
2025-07-12 13:02:33.047932 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-07-12 13:02:33.076813 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:02:33.076859 | orchestrator |
2025-07-12 13:02:33.076866 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ******************************
2025-07-12 13:02:33.107974 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:02:33.108061 | orchestrator |
2025-07-12 13:02:33.108071 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2025-07-12 13:02:33.136930 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:02:33.136966 | orchestrator |
2025-07-12 13:02:33.136973 | orchestrator | TASK [Set APT options on manager] **********************************************
2025-07-12 13:02:33.952158 | orchestrator | changed: [testbed-manager]
2025-07-12 13:02:33.952266 | orchestrator |
2025-07-12 13:02:33.952284 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2025-07-12 13:05:53.598076 | orchestrator | changed: [testbed-manager]
2025-07-12 13:05:53.598185 | orchestrator |
2025-07-12 13:05:53.598215 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-07-12 13:07:20.319275 | orchestrator | changed: [testbed-manager]
2025-07-12 13:07:20.319368 | orchestrator |
2025-07-12 13:07:20.319385 | orchestrator | TASK [Install required packages] ***********************************************
2025-07-12 13:07:40.844096 | orchestrator | changed: [testbed-manager]
2025-07-12 13:07:40.844806 | orchestrator |
2025-07-12 13:07:40.844832 | orchestrator | TASK [Remove some python packages] *********************************************
2025-07-12 13:07:49.662678 | orchestrator | changed: [testbed-manager]
2025-07-12 13:07:49.662742 | orchestrator |
2025-07-12 13:07:49.662794 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2025-07-12 13:07:49.714243 | orchestrator | ok: [testbed-manager]
2025-07-12 13:07:49.714306 | orchestrator |
2025-07-12 13:07:49.714322 | orchestrator | TASK [Get current user] ********************************************************
2025-07-12 13:07:50.504495 | orchestrator | ok: [testbed-manager]
2025-07-12 13:07:50.504576 | orchestrator |
2025-07-12 13:07:50.504601 | orchestrator | TASK [Create venv directory] ***************************************************
2025-07-12 13:07:51.251739 | orchestrator | changed: [testbed-manager]
2025-07-12 13:07:51.251827 | orchestrator |
2025-07-12 13:07:51.251845 | orchestrator | TASK [Install netaddr in venv] *************************************************
2025-07-12 13:07:57.759105 | orchestrator | changed: [testbed-manager]
2025-07-12 13:07:57.759996 | orchestrator |
2025-07-12 13:07:57.760094 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2025-07-12 13:08:03.903639 | orchestrator | changed: [testbed-manager]
2025-07-12 13:08:03.903847 | orchestrator |
2025-07-12 13:08:03.903872 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2025-07-12 13:08:06.650888 | orchestrator | changed: [testbed-manager]
2025-07-12 13:08:06.650962 | orchestrator |
2025-07-12 13:08:06.650977 | orchestrator | TASK [Install docker >= 7.1.0] *************************************************
2025-07-12 13:08:08.422267 | orchestrator | changed: [testbed-manager]
2025-07-12 13:08:08.422315 | orchestrator |
2025-07-12 13:08:08.422326 | orchestrator | TASK [Create directories in /opt/src] ******************************************
2025-07-12 13:08:09.559226 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2025-07-12 13:08:09.559275 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2025-07-12 13:08:09.559284 | orchestrator |
2025-07-12 13:08:09.559296 | orchestrator | TASK [Sync sources in /opt/src] ************************************************
2025-07-12 13:08:09.600071 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2025-07-12 13:08:09.600140 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2025-07-12 13:08:09.600152 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2025-07-12 13:08:09.600164 | orchestrator | deprecation_warnings=False in ansible.cfg.
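The "Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"" tasks in this job are presumably Ansible's `wait_for` module with a `search_regex`. The same check can be approximated in plain bash; this is a hypothetical helper sketch, not code from the testbed repository:

```shell
# Rough equivalent of the job's "wait for port 22 and an OpenSSH banner"
# task. Hypothetical helper, not taken from the testbed scripts; host,
# port and timeout are caller-supplied.
wait_for_ssh() {
    local host=$1 port=${2:-22} limit=${3:-300} banner
    local deadline=$((SECONDS + limit))
    while (( SECONDS < deadline )); do
        # Open a TCP connection via bash's /dev/tcp and read one line:
        # an OpenSSH server sends its version banner right after connect.
        if banner=$(timeout 5 bash -c \
                "read -r b < /dev/tcp/${host}/${port} && printf '%s' \"\$b\"" \
                2>/dev/null) && [[ $banner == *OpenSSH* ]]; then
            return 0
        fi
        sleep 2
    done
    return 1
}
```

Ansible's `wait_for` does the same two things in one module call: retry the TCP connect until the port opens, then match the received data against `search_regex`.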
2025-07-12 13:08:15.715281 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2025-07-12 13:08:15.715352 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2025-07-12 13:08:15.715362 | orchestrator |
2025-07-12 13:08:15.715371 | orchestrator | TASK [Create /usr/share/ansible directory] *************************************
2025-07-12 13:08:16.303043 | orchestrator | changed: [testbed-manager]
2025-07-12 13:08:16.303117 | orchestrator |
2025-07-12 13:08:16.303129 | orchestrator | TASK [Install collections from Ansible galaxy] *********************************
2025-07-12 13:08:36.497275 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon)
2025-07-12 13:08:36.497345 | orchestrator | changed: [testbed-manager] => (item=ansible.posix)
2025-07-12 13:08:36.497361 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2)
2025-07-12 13:08:36.497374 | orchestrator |
2025-07-12 13:08:36.497387 | orchestrator | TASK [Install local collections] ***********************************************
2025-07-12 13:08:38.892156 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons)
2025-07-12 13:08:38.892255 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services)
2025-07-12 13:08:38.892272 | orchestrator |
2025-07-12 13:08:38.892286 | orchestrator | PLAY [Create operator user] ****************************************************
2025-07-12 13:08:38.892298 | orchestrator |
2025-07-12 13:08:38.892310 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-12 13:08:40.304838 | orchestrator | ok: [testbed-manager]
2025-07-12 13:08:40.304928 | orchestrator |
2025-07-12 13:08:40.304946 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-07-12 13:08:40.355442 | orchestrator | ok: [testbed-manager]
2025-07-12 13:08:40.355519 | orchestrator |
2025-07-12 13:08:40.355534 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-07-12 13:08:40.425763 | orchestrator | ok: [testbed-manager]
2025-07-12 13:08:40.425865 | orchestrator |
2025-07-12 13:08:40.425884 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-07-12 13:08:41.160543 | orchestrator | changed: [testbed-manager]
2025-07-12 13:08:41.160598 | orchestrator |
2025-07-12 13:08:41.160607 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-07-12 13:08:41.906829 | orchestrator | changed: [testbed-manager]
2025-07-12 13:08:41.906938 | orchestrator |
2025-07-12 13:08:41.906956 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-07-12 13:08:43.290784 | orchestrator | changed: [testbed-manager] => (item=adm)
2025-07-12 13:08:43.290862 | orchestrator | changed: [testbed-manager] => (item=sudo)
2025-07-12 13:08:43.290871 | orchestrator |
2025-07-12 13:08:43.290887 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-07-12 13:08:44.709399 | orchestrator | changed: [testbed-manager]
2025-07-12 13:08:44.709458 | orchestrator |
2025-07-12 13:08:44.709468 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-07-12 13:08:46.519119 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8)
2025-07-12 13:08:46.519207 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8)
2025-07-12 13:08:46.519222 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8)
2025-07-12 13:08:46.519234 | orchestrator |
2025-07-12 13:08:46.519248 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2025-07-12 13:08:46.578857 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:08:46.578945 | orchestrator |
2025-07-12 13:08:46.578960 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-07-12 13:08:47.154987 | orchestrator | changed: [testbed-manager]
2025-07-12 13:08:47.155030 | orchestrator |
2025-07-12 13:08:47.155040 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-07-12 13:08:47.214381 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:08:47.214426 | orchestrator |
2025-07-12 13:08:47.214434 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-07-12 13:08:48.064298 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-12 13:08:48.064376 | orchestrator | changed: [testbed-manager]
2025-07-12 13:08:48.064391 | orchestrator |
2025-07-12 13:08:48.064402 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-07-12 13:08:48.102264 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:08:48.102334 | orchestrator |
2025-07-12 13:08:48.102348 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-07-12 13:08:48.147222 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:08:48.147294 | orchestrator |
2025-07-12 13:08:48.147309 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-07-12 13:08:48.184599 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:08:48.184673 | orchestrator |
2025-07-12 13:08:48.184688 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-07-12 13:08:48.236188 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:08:48.236258 | orchestrator |
2025-07-12 13:08:48.236275 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-07-12 13:08:48.950595 | orchestrator | ok: [testbed-manager]
2025-07-12 13:08:48.950681 | orchestrator |
2025-07-12 13:08:48.950697 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-07-12 13:08:48.950710 | orchestrator |
2025-07-12 13:08:48.950722 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-12 13:08:50.386112 | orchestrator | ok: [testbed-manager]
2025-07-12 13:08:50.386201 | orchestrator |
2025-07-12 13:08:50.386217 | orchestrator | TASK [Recursively change ownership of /opt/venv] *******************************
2025-07-12 13:08:51.391625 | orchestrator | changed: [testbed-manager]
2025-07-12 13:08:51.391695 | orchestrator |
2025-07-12 13:08:51.391710 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:08:51.391724 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0
2025-07-12 13:08:51.391736 | orchestrator |
2025-07-12 13:08:51.716985 | orchestrator | ok: Runtime: 0:06:24.217149
2025-07-12 13:08:51.735494 |
2025-07-12 13:08:51.735705 | TASK [Point out that the log in on the manager is now possible]
2025-07-12 13:08:51.775250 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'.
2025-07-12 13:08:51.785204 |
2025-07-12 13:08:51.785339 | TASK [Point out that the following task takes some time and does not give any output]
2025-07-12 13:08:51.833338 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2025-07-12 13:08:51.844171 |
2025-07-12 13:08:51.844325 | TASK [Run manager part 1 + 2]
2025-07-12 13:08:52.701192 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-07-12 13:08:52.755766 | orchestrator |
2025-07-12 13:08:52.755861 | orchestrator | PLAY [Run manager part 1] ******************************************************
2025-07-12 13:08:52.755870 | orchestrator |
2025-07-12 13:08:52.755882 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-12 13:08:55.294641 | orchestrator | ok: [testbed-manager]
2025-07-12 13:08:55.294689 | orchestrator |
2025-07-12 13:08:55.294710 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-07-12 13:08:55.329190 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:08:55.329233 | orchestrator |
2025-07-12 13:08:55.329240 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2025-07-12 13:08:55.364976 | orchestrator | ok: [testbed-manager]
2025-07-12 13:08:55.365032 | orchestrator |
2025-07-12 13:08:55.365045 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-07-12 13:08:55.400149 | orchestrator | ok: [testbed-manager]
2025-07-12 13:08:55.400192 | orchestrator |
2025-07-12 13:08:55.400199 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-07-12 13:08:55.464366 | orchestrator | ok: [testbed-manager]
2025-07-12 13:08:55.464415 | orchestrator |
2025-07-12 13:08:55.464424 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-07-12 13:08:55.524957 | orchestrator | ok: [testbed-manager]
2025-07-12 13:08:55.525007 | orchestrator |
2025-07-12 13:08:55.525015 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-07-12 13:08:55.571790 | orchestrator | included: /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager
2025-07-12 13:08:55.571980 | orchestrator |
2025-07-12 13:08:55.571988 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-07-12 13:08:56.319292 | orchestrator | ok: [testbed-manager]
2025-07-12 13:08:56.319345 | orchestrator |
2025-07-12 13:08:56.319354 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-07-12 13:08:56.366556 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:08:56.366603 | orchestrator |
2025-07-12 13:08:56.366610 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-07-12 13:08:57.708274 | orchestrator | changed: [testbed-manager]
2025-07-12 13:08:57.708337 | orchestrator |
2025-07-12 13:08:57.708349 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-07-12 13:08:58.298552 | orchestrator | ok: [testbed-manager]
2025-07-12 13:08:58.298606 | orchestrator |
2025-07-12 13:08:58.298615 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-07-12 13:08:59.473946 | orchestrator | changed: [testbed-manager]
2025-07-12 13:08:59.474058 | orchestrator |
2025-07-12 13:08:59.474079 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-07-12 13:09:12.687680 | orchestrator | changed: [testbed-manager]
2025-07-12 13:09:12.687873 | orchestrator |
2025-07-12 13:09:12.687892 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-07-12 13:09:13.390595 | orchestrator | ok: [testbed-manager]
2025-07-12 13:09:13.390674 | orchestrator |
2025-07-12 13:09:13.390690 | orchestrator | TASK [Set repo_path fact] ******************************************************
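The repository role above removes /etc/apt/sources.list and installs a deb822-style ubuntu.sources file instead. A minimal sketch of that file format follows; it writes to the current directory (the real target is /etc/apt/sources.list.d/ubuntu.sources), and the mirror URI and suites are illustrative values, not the role's actual template:

```shell
# Write a minimal deb822-style APT sources file.
# Illustrative values only; not the osism.commons.repository template.
cat > ubuntu.sources <<'EOF'
Types: deb
URIs: http://archive.ubuntu.com/ubuntu
Suites: noble noble-updates noble-backports
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
EOF
```

On Ubuntu 24.04 this deb822 format is the default, which is why the role deletes the legacy one-line sources.list after installing it.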
2025-07-12 13:09:13.449988 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:09:13.450072 | orchestrator |
2025-07-12 13:09:13.450086 | orchestrator | TASK [Copy SSH public key] *****************************************************
2025-07-12 13:09:14.407706 | orchestrator | changed: [testbed-manager]
2025-07-12 13:09:14.407792 | orchestrator |
2025-07-12 13:09:14.407808 | orchestrator | TASK [Copy SSH private key] ****************************************************
2025-07-12 13:09:15.394677 | orchestrator | changed: [testbed-manager]
2025-07-12 13:09:15.394772 | orchestrator |
2025-07-12 13:09:15.394788 | orchestrator | TASK [Create configuration directory] ******************************************
2025-07-12 13:09:15.978650 | orchestrator | changed: [testbed-manager]
2025-07-12 13:09:15.978693 | orchestrator |
2025-07-12 13:09:15.978702 | orchestrator | TASK [Copy testbed repo] *******************************************************
2025-07-12 13:09:16.019702 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2025-07-12 13:09:16.019814 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2025-07-12 13:09:16.020011 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2025-07-12 13:09:16.020032 | orchestrator | deprecation_warnings=False in ansible.cfg.
2025-07-12 13:09:18.799295 | orchestrator | changed: [testbed-manager]
2025-07-12 13:09:18.799481 | orchestrator |
2025-07-12 13:09:18.799492 | orchestrator | TASK [Install python requirements in venv] *************************************
2025-07-12 13:09:27.916585 | orchestrator | ok: [testbed-manager] => (item=Jinja2)
2025-07-12 13:09:27.916637 | orchestrator | ok: [testbed-manager] => (item=PyYAML)
2025-07-12 13:09:27.916647 | orchestrator | ok: [testbed-manager] => (item=packaging)
2025-07-12 13:09:27.916655 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3)
2025-07-12 13:09:27.916665 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2)
2025-07-12 13:09:27.916672 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0)
2025-07-12 13:09:27.916679 | orchestrator |
2025-07-12 13:09:27.916687 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] *********************
2025-07-12 13:09:28.984315 | orchestrator | changed: [testbed-manager]
2025-07-12 13:09:28.984366 | orchestrator |
2025-07-12 13:09:28.984379 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] ****************************
2025-07-12 13:09:29.029790 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:09:29.029858 | orchestrator |
2025-07-12 13:09:29.029867 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] *****************************
2025-07-12 13:09:32.108675 | orchestrator | changed: [testbed-manager]
2025-07-12 13:09:32.108742 | orchestrator |
2025-07-12 13:09:32.108756 | orchestrator | TASK [Run update-ca-trust on RedHat] *******************************************
2025-07-12 13:09:32.153110 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:09:32.153172 | orchestrator |
2025-07-12 13:09:32.153180 | orchestrator | TASK [Run manager part 2] ******************************************************
2025-07-12 13:11:11.980921 | orchestrator | changed: [testbed-manager]
2025-07-12 13:11:11.981016 | orchestrator |
2025-07-12 13:11:11.981035 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-07-12 13:11:13.131920 | orchestrator | ok: [testbed-manager]
2025-07-12 13:11:13.132129 | orchestrator |
2025-07-12 13:11:13.132149 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:11:13.132167 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2025-07-12 13:11:13.132189 | orchestrator |
2025-07-12 13:11:13.477195 | orchestrator | ok: Runtime: 0:02:21.051447
2025-07-12 13:11:13.494524 |
2025-07-12 13:11:13.494699 | TASK [Reboot manager]
2025-07-12 13:11:15.031285 | orchestrator | ok: Runtime: 0:00:00.977020
2025-07-12 13:11:15.048662 |
2025-07-12 13:11:15.048845 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-07-12 13:11:31.447285 | orchestrator | ok
2025-07-12 13:11:31.458434 |
2025-07-12 13:11:31.458613 | TASK [Wait a little longer for the manager so that everything is ready]
2025-07-12 13:12:31.507474 | orchestrator | ok
2025-07-12 13:12:31.517847 |
2025-07-12 13:12:31.518005 | TASK [Deploy manager + bootstrap nodes]
2025-07-12 13:12:33.978768 | orchestrator |
2025-07-12 13:12:33.979031 | orchestrator | # DEPLOY MANAGER
2025-07-12 13:12:33.979073 | orchestrator |
2025-07-12 13:12:33.979099 | orchestrator | + set -e
2025-07-12 13:12:33.979113 | orchestrator | + echo
2025-07-12 13:12:33.979127 | orchestrator | + echo '# DEPLOY MANAGER'
2025-07-12 13:12:33.979144 | orchestrator | + echo
2025-07-12 13:12:33.979196 | orchestrator | + cat /opt/manager-vars.sh
2025-07-12 13:12:33.983366 | orchestrator | export NUMBER_OF_NODES=6
2025-07-12 13:12:33.983392 | orchestrator |
2025-07-12 13:12:33.983406 | orchestrator | export CEPH_VERSION=reef
2025-07-12 13:12:33.983419 | orchestrator | export CONFIGURATION_VERSION=main
2025-07-12 13:12:33.983431 | orchestrator | export MANAGER_VERSION=9.2.0
2025-07-12 13:12:33.983453 | orchestrator | export OPENSTACK_VERSION=2024.2
2025-07-12 13:12:33.983464 | orchestrator |
2025-07-12 13:12:33.983482 | orchestrator | export ARA=false
2025-07-12 13:12:33.983494 | orchestrator | export DEPLOY_MODE=manager
2025-07-12 13:12:33.983512 | orchestrator | export TEMPEST=false
2025-07-12 13:12:33.983523 | orchestrator | export IS_ZUUL=true
2025-07-12 13:12:33.983534 | orchestrator |
2025-07-12 13:12:33.983552 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180
2025-07-12 13:12:33.983564 | orchestrator | export EXTERNAL_API=false
2025-07-12 13:12:33.983575 | orchestrator |
2025-07-12 13:12:33.983586 | orchestrator | export IMAGE_USER=ubuntu
2025-07-12 13:12:33.983601 | orchestrator | export IMAGE_NODE_USER=ubuntu
2025-07-12 13:12:33.983612 | orchestrator |
2025-07-12 13:12:33.983623 | orchestrator | export CEPH_STACK=ceph-ansible
2025-07-12 13:12:33.983638 | orchestrator |
2025-07-12 13:12:33.983650 | orchestrator | + echo
2025-07-12 13:12:33.983665 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-07-12 13:12:33.984517 | orchestrator | ++ export INTERACTIVE=false
2025-07-12 13:12:33.984535 | orchestrator | ++ INTERACTIVE=false
2025-07-12 13:12:33.984548 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-07-12 13:12:33.984561 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-07-12 13:12:33.984741 | orchestrator | + source /opt/manager-vars.sh
2025-07-12 13:12:33.984756 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-07-12 13:12:33.984768 | orchestrator | ++ NUMBER_OF_NODES=6
2025-07-12 13:12:33.984778 | orchestrator | ++ export CEPH_VERSION=reef
2025-07-12 13:12:33.984789 | orchestrator | ++ CEPH_VERSION=reef
2025-07-12 13:12:33.984800 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-07-12 13:12:33.984817 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-07-12 13:12:33.984828 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-07-12 13:12:33.984839 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-07-12 13:12:33.984855 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-12 13:12:33.984873 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-12 13:12:33.984885 | orchestrator | ++ export ARA=false
2025-07-12 13:12:33.984902 | orchestrator | ++ ARA=false
2025-07-12 13:12:33.984913 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-12 13:12:33.984924 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-12 13:12:33.984938 | orchestrator | ++ export TEMPEST=false
2025-07-12 13:12:33.984975 | orchestrator | ++ TEMPEST=false
2025-07-12 13:12:33.984993 | orchestrator | ++ export IS_ZUUL=true
2025-07-12 13:12:33.985004 | orchestrator | ++ IS_ZUUL=true
2025-07-12 13:12:33.985015 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180
2025-07-12 13:12:33.985029 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180
2025-07-12 13:12:33.985040 | orchestrator | ++ export EXTERNAL_API=false
2025-07-12 13:12:33.985051 | orchestrator | ++ EXTERNAL_API=false
2025-07-12 13:12:33.985062 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-12 13:12:33.985076 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-12 13:12:33.985094 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-12 13:12:33.985105 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-12 13:12:33.985116 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-12 13:12:33.985127 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-12 13:12:33.985142 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-07-12 13:12:34.048608 | orchestrator | + docker version
2025-07-12 13:12:34.308688 | orchestrator | Client: Docker Engine - Community
2025-07-12 13:12:34.308766 | orchestrator | Version: 27.5.1
2025-07-12 13:12:34.308774 | orchestrator | API version: 1.47
2025-07-12 13:12:34.308779 | orchestrator | Go version: go1.22.11
2025-07-12 13:12:34.308783 | orchestrator | Git commit: 9f9e405
2025-07-12 13:12:34.308787 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-07-12 13:12:34.308792 | orchestrator | OS/Arch: linux/amd64
2025-07-12 13:12:34.308796 | orchestrator | Context: default
2025-07-12 13:12:34.308800 | orchestrator |
2025-07-12 13:12:34.308804 | orchestrator | Server: Docker Engine - Community
2025-07-12 13:12:34.308808 | orchestrator | Engine:
2025-07-12 13:12:34.308812 | orchestrator | Version: 27.5.1
2025-07-12 13:12:34.308816 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-07-12 13:12:34.308840 | orchestrator | Go version: go1.22.11
2025-07-12 13:12:34.308844 | orchestrator | Git commit: 4c9b3b0
2025-07-12 13:12:34.308848 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-07-12 13:12:34.308851 | orchestrator | OS/Arch: linux/amd64
2025-07-12 13:12:34.308864 | orchestrator | Experimental: false
2025-07-12 13:12:34.308868 | orchestrator | containerd:
2025-07-12 13:12:34.308872 | orchestrator | Version: 1.7.27
2025-07-12 13:12:34.308876 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-07-12 13:12:34.308880 | orchestrator | runc:
2025-07-12 13:12:34.308884 | orchestrator | Version: 1.2.5
2025-07-12 13:12:34.308888 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-07-12 13:12:34.308892 | orchestrator | docker-init:
2025-07-12 13:12:34.308896 | orchestrator | Version: 0.19.0
2025-07-12 13:12:34.308900 | orchestrator | GitCommit: de40ad0
2025-07-12 13:12:34.315040 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-07-12 13:12:34.324297 | orchestrator | + set -e
2025-07-12 13:12:34.324356 | orchestrator | + source /opt/manager-vars.sh
2025-07-12 13:12:34.324377 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-07-12 13:12:34.324388 | orchestrator | ++ NUMBER_OF_NODES=6
2025-07-12 13:12:34.324400 | orchestrator | ++ export CEPH_VERSION=reef
2025-07-12 13:12:34.324417 | orchestrator | ++ CEPH_VERSION=reef
2025-07-12 13:12:34.324429 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-07-12 13:12:34.324441 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-07-12 13:12:34.324459 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-07-12 13:12:34.324470 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-07-12 13:12:34.324481 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-12 13:12:34.324492 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-12 13:12:34.324518 | orchestrator | ++ export ARA=false
2025-07-12 13:12:34.324530 | orchestrator | ++ ARA=false
2025-07-12 13:12:34.324541 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-12 13:12:34.324551 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-12 13:12:34.324562 | orchestrator | ++ export TEMPEST=false
2025-07-12 13:12:34.324572 | orchestrator | ++ TEMPEST=false
2025-07-12 13:12:34.324583 | orchestrator | ++ export IS_ZUUL=true
2025-07-12 13:12:34.324594 | orchestrator | ++ IS_ZUUL=true
2025-07-12 13:12:34.324614 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180
2025-07-12 13:12:34.324625 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180
2025-07-12 13:12:34.324636 | orchestrator | ++ export EXTERNAL_API=false
2025-07-12 13:12:34.324647 | orchestrator | ++ EXTERNAL_API=false
2025-07-12 13:12:34.324664 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-12 13:12:34.324675 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-12 13:12:34.324686 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-12 13:12:34.324697 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-12 13:12:34.324707 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-12 13:12:34.324718 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-12 13:12:34.324742 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-07-12 13:12:34.324753 | orchestrator | ++ export INTERACTIVE=false
2025-07-12 13:12:34.324773 | orchestrator | ++ INTERACTIVE=false
2025-07-12 13:12:34.324784 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-07-12 13:12:34.324799 | orchestrator | ++
OSISM_APPLY_RETRY=1 2025-07-12 13:12:34.324815 | orchestrator | + [[ 9.2.0 != \l\a\t\e\s\t ]] 2025-07-12 13:12:34.324826 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.2.0 2025-07-12 13:12:34.332160 | orchestrator | + set -e 2025-07-12 13:12:34.332185 | orchestrator | + VERSION=9.2.0 2025-07-12 13:12:34.332198 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.2.0/g' /opt/configuration/environments/manager/configuration.yml 2025-07-12 13:12:34.341431 | orchestrator | + [[ 9.2.0 != \l\a\t\e\s\t ]] 2025-07-12 13:12:34.341457 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2025-07-12 13:12:34.348270 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2025-07-12 13:12:34.352890 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2025-07-12 13:12:34.360369 | orchestrator | /opt/configuration ~ 2025-07-12 13:12:34.360402 | orchestrator | + set -e 2025-07-12 13:12:34.360414 | orchestrator | + pushd /opt/configuration 2025-07-12 13:12:34.360425 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-07-12 13:12:34.363571 | orchestrator | + source /opt/venv/bin/activate 2025-07-12 13:12:34.365061 | orchestrator | ++ deactivate nondestructive 2025-07-12 13:12:34.365137 | orchestrator | ++ '[' -n '' ']' 2025-07-12 13:12:34.365153 | orchestrator | ++ '[' -n '' ']' 2025-07-12 13:12:34.365191 | orchestrator | ++ hash -r 2025-07-12 13:12:34.365203 | orchestrator | ++ '[' -n '' ']' 2025-07-12 13:12:34.365214 | orchestrator | ++ unset VIRTUAL_ENV 2025-07-12 13:12:34.365224 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-07-12 13:12:34.365236 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-07-12 13:12:34.365248 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-07-12 13:12:34.365258 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-07-12 13:12:34.365269 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-07-12 13:12:34.365280 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-07-12 13:12:34.365291 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-07-12 13:12:34.365303 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-07-12 13:12:34.365314 | orchestrator | ++ export PATH 2025-07-12 13:12:34.365325 | orchestrator | ++ '[' -n '' ']' 2025-07-12 13:12:34.365336 | orchestrator | ++ '[' -z '' ']' 2025-07-12 13:12:34.365347 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-07-12 13:12:34.365357 | orchestrator | ++ PS1='(venv) ' 2025-07-12 13:12:34.365368 | orchestrator | ++ export PS1 2025-07-12 13:12:34.365379 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-07-12 13:12:34.365389 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-07-12 13:12:34.365400 | orchestrator | ++ hash -r 2025-07-12 13:12:34.365412 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2025-07-12 13:12:35.435648 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2025-07-12 13:12:35.435773 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.4) 2025-07-12 13:12:35.437135 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2025-07-12 13:12:35.438441 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2) 2025-07-12 13:12:35.439477 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (25.0) 2025-07-12 13:12:35.449489 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.2.1) 2025-07-12 13:12:35.450801 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2025-07-12 13:12:35.452090 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19) 2025-07-12 13:12:35.453387 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2025-07-12 13:12:35.485942 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.2) 2025-07-12 13:12:35.487386 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10) 2025-07-12 13:12:35.489435 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.5.0) 2025-07-12 13:12:35.490748 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.7.9) 2025-07-12 13:12:35.495200 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2) 2025-07-12 13:12:35.710206 | orchestrator | ++ which gilt 2025-07-12 13:12:35.713208 | orchestrator | + GILT=/opt/venv/bin/gilt 2025-07-12 13:12:35.713260 | orchestrator | + /opt/venv/bin/gilt overlay 2025-07-12 13:12:35.958538 | orchestrator | osism.cfg-generics: 2025-07-12 13:12:36.126147 | orchestrator | - copied (v0.20250709.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2025-07-12 13:12:36.126288 | orchestrator | - copied 
(v0.20250709.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2025-07-12 13:12:36.126560 | orchestrator | - copied (v0.20250709.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2025-07-12 13:12:36.126684 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2025-07-12 13:12:36.830922 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2025-07-12 13:12:36.842339 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2025-07-12 13:12:37.178672 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2025-07-12 13:12:37.228091 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-07-12 13:12:37.228176 | orchestrator | + deactivate 2025-07-12 13:12:37.228193 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-07-12 13:12:37.228206 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-07-12 13:12:37.228217 | orchestrator | + export PATH 2025-07-12 13:12:37.228228 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-07-12 13:12:37.228240 | orchestrator | + '[' -n '' ']' 2025-07-12 13:12:37.228253 | orchestrator | + hash -r 2025-07-12 13:12:37.228264 | orchestrator | + '[' -n '' ']' 2025-07-12 13:12:37.228275 | orchestrator | + unset VIRTUAL_ENV 2025-07-12 13:12:37.228285 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-07-12 13:12:37.228309 | orchestrator | ~ 2025-07-12 13:12:37.228321 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-07-12 13:12:37.228332 | orchestrator | + unset -f deactivate 2025-07-12 13:12:37.228343 | orchestrator | + popd 2025-07-12 13:12:37.229945 | orchestrator | + [[ 9.2.0 == \l\a\t\e\s\t ]] 2025-07-12 13:12:37.229988 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-07-12 13:12:37.230486 | orchestrator | ++ semver 9.2.0 7.0.0 2025-07-12 13:12:37.280039 | orchestrator | + [[ 1 -ge 0 ]] 2025-07-12 13:12:37.280093 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-07-12 13:12:37.280109 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-07-12 13:12:37.350820 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-07-12 13:12:37.350900 | orchestrator | + source /opt/venv/bin/activate 2025-07-12 13:12:37.350913 | orchestrator | ++ deactivate nondestructive 2025-07-12 13:12:37.350933 | orchestrator | ++ '[' -n '' ']' 2025-07-12 13:12:37.350945 | orchestrator | ++ '[' -n '' ']' 2025-07-12 13:12:37.351148 | orchestrator | ++ hash -r 2025-07-12 13:12:37.351173 | orchestrator | ++ '[' -n '' ']' 2025-07-12 13:12:37.351185 | orchestrator | ++ unset VIRTUAL_ENV 2025-07-12 13:12:37.351195 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-07-12 13:12:37.351211 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-07-12 13:12:37.351298 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-07-12 13:12:37.351319 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-07-12 13:12:37.351334 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-07-12 13:12:37.351345 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-07-12 13:12:37.351453 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-07-12 13:12:37.351479 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-07-12 13:12:37.351562 | orchestrator | ++ export PATH 2025-07-12 13:12:37.351586 | orchestrator | ++ '[' -n '' ']' 2025-07-12 13:12:37.351601 | orchestrator | ++ '[' -z '' ']' 2025-07-12 13:12:37.351662 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-07-12 13:12:37.351683 | orchestrator | ++ PS1='(venv) ' 2025-07-12 13:12:37.351694 | orchestrator | ++ export PS1 2025-07-12 13:12:37.351709 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-07-12 13:12:37.351730 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-07-12 13:12:37.351792 | orchestrator | ++ hash -r 2025-07-12 13:12:37.351986 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-07-12 13:12:38.450878 | orchestrator | 2025-07-12 13:12:38.451077 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-07-12 13:12:38.451099 | orchestrator | 2025-07-12 13:12:38.451111 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-07-12 13:12:39.027941 | orchestrator | ok: [testbed-manager] 2025-07-12 13:12:39.028066 | orchestrator | 2025-07-12 13:12:39.028077 | orchestrator | TASK [Copy fact files] ********************************************************* 
2025-07-12 13:12:40.008656 | orchestrator | changed: [testbed-manager]
2025-07-12 13:12:40.008760 | orchestrator |
2025-07-12 13:12:40.008776 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-07-12 13:12:40.008789 | orchestrator |
2025-07-12 13:12:40.008800 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-12 13:12:42.388767 | orchestrator | ok: [testbed-manager]
2025-07-12 13:12:42.388878 | orchestrator |
2025-07-12 13:12:42.388895 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-07-12 13:12:42.443723 | orchestrator | ok: [testbed-manager]
2025-07-12 13:12:42.443787 | orchestrator |
2025-07-12 13:12:42.443801 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-07-12 13:12:42.914720 | orchestrator | changed: [testbed-manager]
2025-07-12 13:12:42.914825 | orchestrator |
2025-07-12 13:12:42.914843 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2025-07-12 13:12:42.960648 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:12:42.960691 | orchestrator |
2025-07-12 13:12:42.960704 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-07-12 13:12:43.295263 | orchestrator | changed: [testbed-manager]
2025-07-12 13:12:43.295351 | orchestrator |
2025-07-12 13:12:43.295364 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-07-12 13:12:43.350475 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:12:43.350578 | orchestrator |
2025-07-12 13:12:43.350596 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-07-12 13:12:43.676196 | orchestrator | ok: [testbed-manager]
2025-07-12 13:12:43.676294 | orchestrator |
2025-07-12 13:12:43.676308 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-07-12 13:12:43.799470 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:12:43.799563 | orchestrator |
2025-07-12 13:12:43.799577 | orchestrator | PLAY [Apply role traefik] ******************************************************
2025-07-12 13:12:43.799589 | orchestrator |
2025-07-12 13:12:43.799601 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-12 13:12:45.670208 | orchestrator | ok: [testbed-manager]
2025-07-12 13:12:45.670322 | orchestrator |
2025-07-12 13:12:45.670340 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-07-12 13:12:45.775023 | orchestrator | included: osism.services.traefik for testbed-manager
2025-07-12 13:12:45.775124 | orchestrator |
2025-07-12 13:12:45.775139 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-07-12 13:12:45.829411 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-07-12 13:12:45.829476 | orchestrator |
2025-07-12 13:12:45.829487 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-07-12 13:12:46.996174 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-07-12 13:12:46.996283 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-07-12 13:12:46.996302 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-07-12 13:12:46.996314 | orchestrator |
2025-07-12 13:12:46.996327 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-07-12 13:12:48.812133 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-07-12 13:12:48.812247 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-07-12 13:12:48.812263 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-07-12 13:12:48.812275 | orchestrator |
2025-07-12 13:12:48.812288 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-07-12 13:12:49.460562 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-12 13:12:49.460671 | orchestrator | changed: [testbed-manager]
2025-07-12 13:12:49.460688 | orchestrator |
2025-07-12 13:12:49.460701 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-07-12 13:12:50.130408 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-12 13:12:50.130538 | orchestrator | changed: [testbed-manager]
2025-07-12 13:12:50.130564 | orchestrator |
2025-07-12 13:12:50.130586 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-07-12 13:12:50.192257 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:12:50.192345 | orchestrator |
2025-07-12 13:12:50.192359 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-07-12 13:12:50.567613 | orchestrator | ok: [testbed-manager]
2025-07-12 13:12:50.567718 | orchestrator |
2025-07-12 13:12:50.567735 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-07-12 13:12:50.646671 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-07-12 13:12:50.646753 | orchestrator |
2025-07-12 13:12:50.646767 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-07-12 13:12:51.751501 | orchestrator | changed: [testbed-manager]
2025-07-12 13:12:51.751619 | orchestrator |
2025-07-12 13:12:51.751648 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-07-12 13:12:52.545808 | orchestrator | changed: [testbed-manager]
2025-07-12 13:12:52.545912 | orchestrator |
2025-07-12 13:12:52.545929 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-07-12 13:13:04.575089 | orchestrator | changed: [testbed-manager]
2025-07-12 13:13:04.575235 | orchestrator |
2025-07-12 13:13:04.575291 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-07-12 13:13:04.623732 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:13:04.623825 | orchestrator |
2025-07-12 13:13:04.623843 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-07-12 13:13:04.623858 | orchestrator |
2025-07-12 13:13:04.623870 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-12 13:13:06.455574 | orchestrator | ok: [testbed-manager]
2025-07-12 13:13:06.455678 | orchestrator |
2025-07-12 13:13:06.455695 | orchestrator | TASK [Apply manager role] ******************************************************
2025-07-12 13:13:06.567837 | orchestrator | included: osism.services.manager for testbed-manager
2025-07-12 13:13:06.567949 | orchestrator |
2025-07-12 13:13:06.568018 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-07-12 13:13:06.627363 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-07-12 13:13:06.627446 | orchestrator |
2025-07-12 13:13:06.627462 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-07-12 13:13:09.257065 | orchestrator | ok: [testbed-manager]
2025-07-12 13:13:09.257171 | orchestrator |
2025-07-12 13:13:09.257187 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-07-12 13:13:09.315927 | orchestrator | ok: [testbed-manager]
2025-07-12 13:13:09.316035 | orchestrator |
2025-07-12 13:13:09.316049 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-07-12 13:13:09.445472 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-07-12 13:13:09.445561 | orchestrator |
2025-07-12 13:13:09.445577 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-07-12 13:13:12.342356 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-07-12 13:13:12.342473 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-07-12 13:13:12.342493 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-07-12 13:13:12.342513 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-07-12 13:13:12.342532 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-07-12 13:13:12.342549 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-07-12 13:13:12.342567 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-07-12 13:13:12.342585 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-07-12 13:13:12.342603 | orchestrator |
2025-07-12 13:13:12.342626 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-07-12 13:13:13.009248 | orchestrator | changed: [testbed-manager]
2025-07-12 13:13:13.009353 | orchestrator |
2025-07-12 13:13:13.009369 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-07-12 13:13:13.644960 | orchestrator | changed: [testbed-manager]
2025-07-12 13:13:13.645105 | orchestrator |
2025-07-12 13:13:13.645122 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-07-12 13:13:13.728954 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-07-12 13:13:13.729124 | orchestrator |
2025-07-12 13:13:13.729141 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-07-12 13:13:14.989665 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-07-12 13:13:14.989763 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-07-12 13:13:14.989778 | orchestrator |
2025-07-12 13:13:14.989791 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-07-12 13:13:15.615193 | orchestrator | changed: [testbed-manager]
2025-07-12 13:13:15.615265 | orchestrator |
2025-07-12 13:13:15.615279 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-07-12 13:13:15.674005 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:13:15.674187 | orchestrator |
2025-07-12 13:13:15.674207 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-07-12 13:13:15.739800 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-07-12 13:13:15.739881 | orchestrator |
2025-07-12 13:13:15.739895 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-07-12 13:13:17.125362 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-12 13:13:17.125467 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-12 13:13:17.125483 | orchestrator | changed: [testbed-manager]
2025-07-12 13:13:17.125498 | orchestrator |
2025-07-12 13:13:17.125510 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-07-12 13:13:17.762824 | orchestrator | changed: [testbed-manager]
2025-07-12 13:13:17.762909 | orchestrator |
2025-07-12 13:13:17.762924 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-07-12 13:13:17.824828 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:13:17.824890 | orchestrator |
2025-07-12 13:13:17.824906 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-07-12 13:13:17.927565 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-07-12 13:13:17.927650 | orchestrator |
2025-07-12 13:13:17.927665 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-07-12 13:13:18.463214 | orchestrator | changed: [testbed-manager]
2025-07-12 13:13:18.463311 | orchestrator |
2025-07-12 13:13:18.463333 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-07-12 13:13:18.864557 | orchestrator | changed: [testbed-manager]
2025-07-12 13:13:18.864657 | orchestrator |
2025-07-12 13:13:18.864673 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-07-12 13:13:20.139207 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-07-12 13:13:20.139313 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-07-12 13:13:20.139328 | orchestrator |
2025-07-12 13:13:20.139340 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-07-12 13:13:20.822345 | orchestrator | changed: [testbed-manager]
2025-07-12 13:13:20.822448 | orchestrator |
2025-07-12 13:13:20.822472 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-07-12 13:13:21.239830 | orchestrator | ok: [testbed-manager]
2025-07-12 13:13:21.239933 | orchestrator |
2025-07-12 13:13:21.239949 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-07-12 13:13:21.603281 | orchestrator | changed: [testbed-manager]
2025-07-12 13:13:21.603367 | orchestrator |
2025-07-12 13:13:21.603382 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-07-12 13:13:21.650418 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:13:21.650523 | orchestrator |
2025-07-12 13:13:21.650541 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-07-12 13:13:21.738078 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-07-12 13:13:21.738150 | orchestrator |
2025-07-12 13:13:21.738164 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-07-12 13:13:21.783139 | orchestrator | ok: [testbed-manager]
2025-07-12 13:13:21.783227 | orchestrator |
2025-07-12 13:13:21.783245 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-07-12 13:13:23.827101 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-07-12 13:13:23.827228 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-07-12 13:13:23.827257 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-07-12 13:13:23.827277 | orchestrator |
2025-07-12 13:13:23.827298 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-07-12 13:13:24.552966 | orchestrator | changed: [testbed-manager]
2025-07-12 13:13:24.553128 | orchestrator |
2025-07-12 13:13:24.553145 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-07-12 13:13:25.274521 | orchestrator | changed: [testbed-manager]
2025-07-12 13:13:25.274624 | orchestrator |
2025-07-12 13:13:25.274640 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-07-12 13:13:26.013957 | orchestrator | changed: [testbed-manager]
2025-07-12 13:13:26.014152 | orchestrator |
2025-07-12 13:13:26.014168 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-07-12 13:13:26.087315 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-07-12 13:13:26.087398 | orchestrator |
2025-07-12 13:13:26.087412 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-07-12 13:13:26.135929 | orchestrator | ok: [testbed-manager]
2025-07-12 13:13:26.136019 | orchestrator |
2025-07-12 13:13:26.136033 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-07-12 13:13:26.883054 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-07-12 13:13:26.883160 | orchestrator |
2025-07-12 13:13:26.883174 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-07-12 13:13:26.972556 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-07-12 13:13:26.972664 | orchestrator |
2025-07-12 13:13:26.972685 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-07-12 13:13:27.729886 | orchestrator | changed: [testbed-manager]
2025-07-12 13:13:27.730097 | orchestrator |
2025-07-12 13:13:27.730118 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-07-12 13:13:28.337958 | orchestrator | ok: [testbed-manager]
2025-07-12 13:13:28.338189 | orchestrator |
2025-07-12 13:13:28.338209 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-07-12 13:13:28.397551 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:13:28.397638 | orchestrator |
2025-07-12 13:13:28.397653 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-07-12 13:13:28.451337 | orchestrator | ok: [testbed-manager]
2025-07-12 13:13:28.451421 | orchestrator |
2025-07-12 13:13:28.451436 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-07-12 13:13:29.310536 | orchestrator | changed: [testbed-manager]
2025-07-12 13:13:29.310647 | orchestrator |
2025-07-12 13:13:29.310664 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-07-12 13:14:37.968661 | orchestrator | changed: [testbed-manager]
2025-07-12 13:14:37.968786 | orchestrator |
2025-07-12 13:14:37.968804 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-07-12 13:14:38.969267 | orchestrator | ok: [testbed-manager]
2025-07-12 13:14:38.969378 | orchestrator |
2025-07-12 13:14:38.969396 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-07-12 13:14:39.024868 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:14:39.024956 | orchestrator |
2025-07-12 13:14:39.024969 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-07-12 13:14:41.782794 | orchestrator | changed: [testbed-manager]
2025-07-12 13:14:41.782904 | orchestrator |
2025-07-12 13:14:41.782923 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-07-12 13:14:41.832747 | orchestrator | ok: [testbed-manager]
2025-07-12 13:14:41.832832 | orchestrator |
2025-07-12 13:14:41.832846 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-07-12 13:14:41.832859 | orchestrator |
2025-07-12 13:14:41.832870 | orchestrator | RUNNING
HANDLER [osism.services.manager : Restart manager service] ************* 2025-07-12 13:14:41.892816 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:14:41.892878 | orchestrator | 2025-07-12 13:14:41.892923 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-07-12 13:15:41.943749 | orchestrator | Pausing for 60 seconds 2025-07-12 13:15:41.943867 | orchestrator | changed: [testbed-manager] 2025-07-12 13:15:41.943885 | orchestrator | 2025-07-12 13:15:41.943898 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-07-12 13:15:46.081838 | orchestrator | changed: [testbed-manager] 2025-07-12 13:15:46.081939 | orchestrator | 2025-07-12 13:15:46.081955 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-07-12 13:16:27.926383 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-07-12 13:16:27.926511 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2025-07-12 13:16:27.926528 | orchestrator | changed: [testbed-manager]
2025-07-12 13:16:27.926543 | orchestrator |
2025-07-12 13:16:27.926555 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-07-12 13:16:37.644756 | orchestrator | changed: [testbed-manager]
2025-07-12 13:16:37.644875 | orchestrator |
2025-07-12 13:16:37.644916 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-07-12 13:16:37.732371 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-07-12 13:16:37.732457 | orchestrator |
2025-07-12 13:16:37.732472 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-07-12 13:16:37.732484 | orchestrator |
2025-07-12 13:16:37.732496 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-07-12 13:16:37.778806 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:16:37.778886 | orchestrator |
2025-07-12 13:16:37.778900 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:16:37.778913 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2025-07-12 13:16:37.778924 | orchestrator |
2025-07-12 13:16:37.878214 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-07-12 13:16:37.878283 | orchestrator | + deactivate
2025-07-12 13:16:37.878298 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-07-12 13:16:37.878311 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-07-12 13:16:37.878322 | orchestrator | + export PATH
2025-07-12 13:16:37.878339 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-07-12 13:16:37.878351 | orchestrator | + '[' -n '' ']'
2025-07-12 13:16:37.878362 | orchestrator | + hash -r
2025-07-12 13:16:37.878374 | orchestrator | + '[' -n '' ']'
2025-07-12 13:16:37.878384 | orchestrator | + unset VIRTUAL_ENV
2025-07-12 13:16:37.878395 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-07-12 13:16:37.878406 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-07-12 13:16:37.878417 | orchestrator | + unset -f deactivate
2025-07-12 13:16:37.878429 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-07-12 13:16:37.886691 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-07-12 13:16:37.886726 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-07-12 13:16:37.886827 | orchestrator | + local max_attempts=60
2025-07-12 13:16:37.886845 | orchestrator | + local name=ceph-ansible
2025-07-12 13:16:37.886857 | orchestrator | + local attempt_num=1
2025-07-12 13:16:37.887393 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 13:16:37.927638 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-12 13:16:37.927685 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-07-12 13:16:37.927707 | orchestrator | + local max_attempts=60
2025-07-12 13:16:37.927728 | orchestrator | + local name=kolla-ansible
2025-07-12 13:16:37.927745 | orchestrator | + local attempt_num=1
2025-07-12 13:16:37.928446 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-07-12 13:16:37.966315 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-12 13:16:37.966358 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-07-12 13:16:37.966370 | orchestrator | + local max_attempts=60
2025-07-12 13:16:37.966381 | orchestrator | + local name=osism-ansible
2025-07-12 13:16:37.966392 | orchestrator | + local attempt_num=1
2025-07-12 13:16:37.967581 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-07-12 13:16:37.999409 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-12 13:16:37.999460 | orchestrator | + [[ true == \t\r\u\e ]]
2025-07-12 13:16:37.999472 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-07-12 13:16:38.775465 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-07-12 13:16:39.023280 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-07-12 13:16:39.023400 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20250711.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2025-07-12 13:16:39.023430 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20250711.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2025-07-12 13:16:39.023449 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2025-07-12 13:16:39.023462 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2025-07-12 13:16:39.023473 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2025-07-12 13:16:39.023483 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2025-07-12 13:16:39.023492 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20250711.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 53 seconds (healthy)
2025-07-12 13:16:39.023502 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2025-07-12 13:16:39.023511 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2025-07-12 13:16:39.023520 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy)
2025-07-12 13:16:39.023530 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
2025-07-12 13:16:39.023539 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20250711.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2025-07-12 13:16:39.023548 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20250711.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2025-07-12 13:16:39.023558 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy)
2025-07-12 13:16:39.031887 | orchestrator | ++ semver 9.2.0 7.0.0
2025-07-12 13:16:39.087241 | orchestrator | + [[ 1 -ge 0 ]]
2025-07-12 13:16:39.087327 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-07-12 13:16:39.092066 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-07-12 13:16:51.244460 | orchestrator | 2025-07-12 13:16:51 | INFO  | Task 8824531e-54ba-47e1-9bff-43e1b7a25f30 (resolvconf) was prepared for execution.
2025-07-12 13:16:51.244622 | orchestrator | 2025-07-12 13:16:51 | INFO  | It takes a moment until task 8824531e-54ba-47e1-9bff-43e1b7a25f30 (resolvconf) has been started and output is visible here.
2025-07-12 13:17:04.967729 | orchestrator |
2025-07-12 13:17:04.967848 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-07-12 13:17:04.967864 | orchestrator |
2025-07-12 13:17:04.967876 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-12 13:17:04.967888 | orchestrator | Saturday 12 July 2025 13:16:55 +0000 (0:00:00.153) 0:00:00.153 *********
2025-07-12 13:17:04.967900 | orchestrator | ok: [testbed-manager]
2025-07-12 13:17:04.967912 | orchestrator |
2025-07-12 13:17:04.967923 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-07-12 13:17:04.967935 | orchestrator | Saturday 12 July 2025 13:16:58 +0000 (0:00:03.703) 0:00:03.857 *********
2025-07-12 13:17:04.967945 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:17:04.967957 | orchestrator |
2025-07-12 13:17:04.967968 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-07-12 13:17:04.967979 | orchestrator | Saturday 12 July 2025 13:16:58 +0000 (0:00:00.070) 0:00:03.928 *********
2025-07-12 13:17:04.967990 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-07-12 13:17:04.968002 | orchestrator |
2025-07-12 13:17:04.968013 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-07-12 13:17:04.968024 | orchestrator | Saturday 12 July 2025 13:16:59 +0000 (0:00:00.097) 0:00:04.026 *********
2025-07-12 13:17:04.968035 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-07-12 13:17:04.968046 | orchestrator |
2025-07-12 13:17:04.968057 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-07-12 13:17:04.968067 | orchestrator | Saturday 12 July 2025 13:16:59 +0000 (0:00:00.069) 0:00:04.095 *********
2025-07-12 13:17:04.968078 | orchestrator | ok: [testbed-manager]
2025-07-12 13:17:04.968122 | orchestrator |
2025-07-12 13:17:04.968134 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-07-12 13:17:04.968144 | orchestrator | Saturday 12 July 2025 13:17:00 +0000 (0:00:01.078) 0:00:05.174 *********
2025-07-12 13:17:04.968155 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:17:04.968166 | orchestrator |
2025-07-12 13:17:04.968176 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-07-12 13:17:04.968187 | orchestrator | Saturday 12 July 2025 13:17:00 +0000 (0:00:00.065) 0:00:05.240 *********
2025-07-12 13:17:04.968198 | orchestrator | ok: [testbed-manager]
2025-07-12 13:17:04.968209 | orchestrator |
2025-07-12 13:17:04.968219 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-07-12 13:17:04.968230 | orchestrator | Saturday 12 July 2025 13:17:00 +0000 (0:00:00.550) 0:00:05.790 *********
2025-07-12 13:17:04.968241 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:17:04.968255 | orchestrator |
2025-07-12 13:17:04.968267 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-07-12 13:17:04.968281 | orchestrator | Saturday 12 July 2025 13:17:00 +0000 (0:00:00.089) 0:00:05.880 *********
2025-07-12 13:17:04.968293 | orchestrator | changed: [testbed-manager]
2025-07-12 13:17:04.968306 | orchestrator |
2025-07-12 13:17:04.968318 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-07-12 13:17:04.968330 | orchestrator | Saturday 12 July 2025 13:17:01 +0000 (0:00:00.500) 0:00:06.380 *********
2025-07-12 13:17:04.968342 | orchestrator | changed: [testbed-manager]
2025-07-12 13:17:04.968354 | orchestrator |
2025-07-12 13:17:04.968367 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-07-12 13:17:04.968379 | orchestrator | Saturday 12 July 2025 13:17:02 +0000 (0:00:01.089) 0:00:07.470 *********
2025-07-12 13:17:04.968418 | orchestrator | ok: [testbed-manager]
2025-07-12 13:17:04.968431 | orchestrator |
2025-07-12 13:17:04.968444 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-07-12 13:17:04.968456 | orchestrator | Saturday 12 July 2025 13:17:03 +0000 (0:00:00.969) 0:00:08.440 *********
2025-07-12 13:17:04.968469 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-07-12 13:17:04.968502 | orchestrator |
2025-07-12 13:17:04.968515 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-07-12 13:17:04.968527 | orchestrator | Saturday 12 July 2025 13:17:03 +0000 (0:00:00.086) 0:00:08.526 *********
2025-07-12 13:17:04.968539 | orchestrator | changed: [testbed-manager]
2025-07-12 13:17:04.968551 | orchestrator |
2025-07-12 13:17:04.968575 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:17:04.968589 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-07-12 13:17:04.968601 | orchestrator |
2025-07-12 13:17:04.968612 | orchestrator |
2025-07-12 13:17:04.968623 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:17:04.968633 | orchestrator | Saturday 12 July 2025 13:17:04 +0000 (0:00:01.164) 0:00:09.691 *********
2025-07-12 13:17:04.968644 | orchestrator | ===============================================================================
2025-07-12 13:17:04.968655 | orchestrator | Gathering Facts --------------------------------------------------------- 3.70s
2025-07-12 13:17:04.968665 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.16s
2025-07-12 13:17:04.968676 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.09s
2025-07-12 13:17:04.968686 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.08s
2025-07-12 13:17:04.968697 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.97s
2025-07-12 13:17:04.968707 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.55s
2025-07-12 13:17:04.968735 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.50s
2025-07-12 13:17:04.968746 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.10s
2025-07-12 13:17:04.968757 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s
2025-07-12 13:17:04.968767 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s
2025-07-12 13:17:04.968778 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s
2025-07-12 13:17:04.968788 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s
2025-07-12 13:17:04.968799 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s
2025-07-12 13:17:05.243604 | orchestrator | + osism apply sshconfig
2025-07-12 13:17:17.205968 | orchestrator | 2025-07-12 13:17:17 | INFO  | Task 1faec21c-fe34-40a4-8a09-68f85d0afb17 (sshconfig) was prepared for execution.
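The resolvconf play above replaces /etc/resolv.conf with a symlink to the systemd-resolved stub file. A small helper to verify that a host ended up in that state (a generic check written for illustration, not part of the job or the role):

```shell
# Succeeds when the given path is a symlink whose target text points at the
# systemd-resolved stub resolver file (absolute or relative link text).
is_stub_resolv() {
    case "$(readlink "$1" 2>/dev/null)" in
        */run/systemd/resolve/stub-resolv.conf) return 0 ;;
        *) return 1 ;;
    esac
}

# Example: is_stub_resolv /etc/resolv.conf && echo "systemd-resolved stub in use"
```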
2025-07-12 13:17:17.206237 | orchestrator | 2025-07-12 13:17:17 | INFO  | It takes a moment until task 1faec21c-fe34-40a4-8a09-68f85d0afb17 (sshconfig) has been started and output is visible here.
2025-07-12 13:17:29.158636 | orchestrator |
2025-07-12 13:17:29.158757 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-07-12 13:17:29.158775 | orchestrator |
2025-07-12 13:17:29.158788 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-07-12 13:17:29.158799 | orchestrator | Saturday 12 July 2025 13:17:21 +0000 (0:00:00.165) 0:00:00.165 *********
2025-07-12 13:17:29.158810 | orchestrator | ok: [testbed-manager]
2025-07-12 13:17:29.158823 | orchestrator |
2025-07-12 13:17:29.158834 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-07-12 13:17:29.158872 | orchestrator | Saturday 12 July 2025 13:17:21 +0000 (0:00:00.634) 0:00:00.800 *********
2025-07-12 13:17:29.158883 | orchestrator | changed: [testbed-manager]
2025-07-12 13:17:29.158895 | orchestrator |
2025-07-12 13:17:29.158905 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-07-12 13:17:29.158916 | orchestrator | Saturday 12 July 2025 13:17:22 +0000 (0:00:00.525) 0:00:01.325 *********
2025-07-12 13:17:29.158926 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-07-12 13:17:29.158937 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-07-12 13:17:29.158948 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-07-12 13:17:29.158958 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-07-12 13:17:29.158969 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-07-12 13:17:29.158979 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-07-12 13:17:29.158989 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-07-12 13:17:29.158999 | orchestrator |
2025-07-12 13:17:29.159010 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-07-12 13:17:29.159020 | orchestrator | Saturday 12 July 2025 13:17:28 +0000 (0:00:05.822) 0:00:07.148 *********
2025-07-12 13:17:29.159031 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:17:29.159041 | orchestrator |
2025-07-12 13:17:29.159052 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-07-12 13:17:29.159062 | orchestrator | Saturday 12 July 2025 13:17:28 +0000 (0:00:00.069) 0:00:07.217 *********
2025-07-12 13:17:29.159073 | orchestrator | changed: [testbed-manager]
2025-07-12 13:17:29.159083 | orchestrator |
2025-07-12 13:17:29.159093 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:17:29.159207 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 13:17:29.159222 | orchestrator |
2025-07-12 13:17:29.159234 | orchestrator |
2025-07-12 13:17:29.159246 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:17:29.159258 | orchestrator | Saturday 12 July 2025 13:17:28 +0000 (0:00:00.620) 0:00:07.837 *********
2025-07-12 13:17:29.159271 | orchestrator | ===============================================================================
2025-07-12 13:17:29.159282 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.82s
2025-07-12 13:17:29.159295 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.63s
2025-07-12 13:17:29.159308 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.62s
2025-07-12 13:17:29.159326 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.53s
2025-07-12 13:17:29.159345 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s
2025-07-12 13:17:29.442114 | orchestrator | + osism apply known-hosts
2025-07-12 13:17:41.349711 | orchestrator | 2025-07-12 13:17:41 | INFO  | Task 15631753-06b1-43d2-bd66-48f19eb5dba8 (known-hosts) was prepared for execution.
2025-07-12 13:17:41.349825 | orchestrator | 2025-07-12 13:17:41 | INFO  | It takes a moment until task 15631753-06b1-43d2-bd66-48f19eb5dba8 (known-hosts) has been started and output is visible here.
2025-07-12 13:17:58.160796 | orchestrator |
2025-07-12 13:17:58.160932 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-07-12 13:17:58.160948 | orchestrator |
2025-07-12 13:17:58.160959 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-07-12 13:17:58.160970 | orchestrator | Saturday 12 July 2025 13:17:45 +0000 (0:00:00.170) 0:00:00.170 *********
2025-07-12 13:17:58.160980 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-07-12 13:17:58.160990 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-07-12 13:17:58.161000 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-07-12 13:17:58.161010 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-07-12 13:17:58.161037 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-07-12 13:17:58.161047 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-07-12 13:17:58.161056 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-07-12 13:17:58.161065 | orchestrator |
2025-07-12 13:17:58.161075 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-07-12 13:17:58.161085 | orchestrator | Saturday 12 July 2025 13:17:51 +0000 (0:00:06.121) 0:00:06.292 *********
2025-07-12 13:17:58.161096 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-07-12 13:17:58.161134 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-07-12 13:17:58.161145 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-07-12 13:17:58.161154 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-07-12 13:17:58.161164 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-07-12 13:17:58.161174 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-07-12 13:17:58.161183 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-07-12 13:17:58.161193 | orchestrator |
2025-07-12 13:17:58.161202 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-12 13:17:58.161212 | orchestrator | Saturday 12 July 2025 13:17:51 +0000 (0:00:00.172) 0:00:06.464 *********
2025-07-12 13:17:58.161226 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCWqdkOt6QSYWVACAGGIUwaviRnIgBkV5H9GDo0kPY54GYhZHC/w6RZjpsP/+KpFLfel8AqAO5gp0MPh+OP+yHXDMoMbNtcYelQNFqJczg5/ZZtorMCA5vLjxUf1gF8zetrLvFu1Z7Awb7yHCR+M2hjILhMajtWI/mOOIe2EvIkEyp4AhM42oh7F/ccYl+uSi1vkiJN6/OI1SgR+OVfnwluJS4rvXj7H8b5nCfo/ZQUKjx2VeHkgJF10t5ri2vG0AC35lDTyL0d+dYMDUykqwOgpJJuSaGN8BiPMBijEtbKwLkJVJ0w2o0l+eegN7gCT4hCxiLBHCY1KuWMqYs2Yj2v2ByI/klJcne1fT+QngnKR06VO6N9QTe4ubzKeFeynz6cthBzlz8Mp+9Cx/YNwONs4jGU9IQUY9ueHLzTIpXEGUnHkpIIjpovbDw0hG3W9cAUOHBe7ltymJJtIjSL53e8k/llVrtW7zceitBeV1Bf6yo9w9EloPS1ZjNYniNR720=)
2025-07-12 13:17:58.161293 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM90WSwB/0hjR/a5pHYG5MWaQrjYc00KAda5dPXKZyZNIk7DXTDtNT9p0v87afBokcaGky5F4Ot3AhD8lZg2hzo=)
2025-07-12 13:17:58.161307 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHOGtVdYscqQyQ4x1t7h0pqjEhfr34eS3VMHm9LMNsqA)
2025-07-12 13:17:58.161318 | orchestrator |
2025-07-12 13:17:58.161327 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-12 13:17:58.161337 | orchestrator | Saturday 12 July 2025 13:17:52 +0000 (0:00:01.173) 0:00:07.638 *********
2025-07-12 13:17:58.161347 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDo4pHI3eVHKTxt6Q/sWJ1SXcJZG3ncsvBNhisEkH9/nwngzX5lc7q9iGRAyRKtyruwl4Gb9bTmHAhAN8hGKD3Y=)
2025-07-12 13:17:58.161357 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILIixSVoPGqxnWHDZDFyCvix2sSBLjHfYa3gzXfFutgj)
2025-07-12 13:17:58.161398 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8Zmsl8ThHZH/ai+F/z7TsfzS9VRF6AhOvQ94K1wPgvCowB3WTYX3KOwZ4UEux0s6M//x/UYIAviyOAr5jtNARzlfcCa1qrEUnb7oFZttIobvk22T3L3Bq0KeglUNNVIOrvO8ndAZzvLZHATGP+96J94GTC6N5jYnna53QCybzrih+cgncjHNsg8dWqSynvNQe1qURskJeQAreW6pJc6HutTH+Ler7pJQgQbS2rWz+E8UVW7L/YFqwlHK7INmBdMLjxX0eXjKx8yb/EtCdrfTjjzLjPnnXzg7ZaI6mfqXfRkvAroAAAsnMknYbMDknR/rHWnoFPl4ZwKKyi1v3VF6AckWNtNzyoXwSC4KCZqMEjqVtvQiJHzR/kNAAgVZULZmlelOXS+FqVSWFt6M5zzUZvd6RlEqZ1COoKXNs4GME5lZqkCnHPQOzA3fxwjuRjf3TLNjBe6y6wsb93ruaberFMTiJ4UHUn8k5Wu6eiOu+04pCWsoPp0dxrkM2+5dqsyU=)
2025-07-12 13:17:58.161409 | orchestrator |
2025-07-12 13:17:58.161419 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-12 13:17:58.161429 | orchestrator | Saturday 12 July 2025 13:17:53 +0000 (0:00:01.093) 0:00:08.732 *********
2025-07-12 13:17:58.161439 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1sgV3f9Z41WkaJQra9j+KoQKJ8EkOaokDU56WZo/+mAXppcGL6/SxnS4Sr9WzD2THqJUjZmYvPp1gY8Jg1BwT/7zEnJl938mTj1pN3RZtsaXxr20O883yoaEteJNIvdrmTIIRmJmkcZHlFZNs0xVI9aspvJBEEbQ6DJa+0hzXc0fZI5MVJosyMWFAfcyUQCCk5lgM0j7Co6YgiH4UZNZgRvUPS5wVsUeZaC8zdG8WixwNyaGYNuFDX8WfMNSety2E2rPvohxLiGvgjyFXJSsaytDUZYceEq8bayhGaxVAmrN/7jESgGLbzI5mwzMN3e/GXK4NQlIu7qicxU4nI08KSqefEiJdGGrWu+asyGc97yxiIMpnBYprfNspkbMVXPP/xnsSn4Py54lZ5hpaHHmLpdkYmmcSA1+vpUITF0dUZHg2b1PKf6DrO7raKswYkNDGJYhy+UsCpbK9QHW8a+tjYJidwnJFrSz5JaPw0Br1dCS5r+LGbhjwYke8cKQYYWc=)
2025-07-12 13:17:58.161449 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPqTClFRoZ7A9HWRSZtNFnsguIom2EUNSH38AKWNSNhSB8/+KjKuxN6GgDHg4HUnL4NZUsHZOhpBLU4flbnWBb8=)
2025-07-12 13:17:58.161459 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHVeQ4irA6d0I79kR4qR4j2o0JaC1t86bZkh8AUluxk3)
2025-07-12 13:17:58.161469 | orchestrator |
2025-07-12 13:17:58.161479 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-12 13:17:58.161489 | orchestrator | Saturday 12 July 2025 13:17:54 +0000 (0:00:01.093) 0:00:09.826 *********
2025-07-12 13:17:58.161498 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHGhBVSpk+pAO+8M+8wKOy3nbyN4Qaj1k2Kao7aALpaC)
2025-07-12 13:17:58.161509 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEmqTXN+wZkD0Uw8z3WIFLehuuM+IKSWLmQvRizp+H2yZuY3XndDXv0QHjwsilXZyoEzkPbQNuc2XLA06QmhfHO9fMHvh9g2wdvvLMIg+EDOS0MZRAkERYly+Pz1UXHMXch8nCUOAmjGe77OSS3Ea+WQ4ywRM6GPCMb7mOu/KMFpURdhGnJW/20hckEKYfMoIsT0ytkFENO6LkncZz0zAIXVCXs83DqR/5mvnftqi9XP6PO4yOrhTHiDeUVRyWgaiJRBghOVXFnWXZIsuRjC/Q7hdSlXFMMb3+DvZ4hcW/bVUy/Jx2PcGrjDfCs7x9HV0FuKCh+/dRWwaJltplb0/CTQZCM28+sjyJVrL5jeCzVISY+zkVA10jTrnWzs5Y7zqE8WD6s1dyWiI9nDjb/TSBu3oTcJUvMxx1Q7SVnIP66A5mlEGEL3COVEXDQDIRqpNMBUSp3aMtAJRDN5hUADNPFL3dbnsYE8B7vSZ6JnsgUkyPlqUx8pCLV5weckzth/8=)
2025-07-12 13:17:58.161518 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDL3tYwh1ozbF+Jgw97A/5L0Mn5QzNDLYAkGI35PMg2SP5oVtREOb512shYQlGYc8NBGozAIFnAwZjK6D2ihYY4=)
2025-07-12 13:17:58.161528 | orchestrator |
2025-07-12 13:17:58.161538 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-12 13:17:58.161548 | orchestrator | Saturday 12 July 2025 13:17:55 +0000 (0:00:01.083) 0:00:10.909 *********
2025-07-12 13:17:58.161558 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCz48dgd9Ei4nhQb4jihLB5y6R6RJ+KTv3yORP9a1SZpTWoVl0n0zcyHNK3z8wT0Jlbx8h16XiO8xxRfzj7KzKoAeTUmAk3HYNFy2Dgk4Bt1BBRJw2BmJZwHPkCP2Q3k8VHHQEjguM4F+C9ulTcYyJfMsjpe7AJ+I2ts9zEH7Rc331zKGZzsxdnXL2KGflkyVgnQxnmtyj0KNZU+u4zXoxuUC+gjoZcDoEuhMq3VqnLIK4ljbunOnAWO46VH+0TzHAQYg3sNNJvCcFADPAwhVN4MViqA8SvSvAei3n0fpE9i+mhAkJymkES0Xz6JeLVIaJU7FJEzOKOtY6SH0IhEBeegxgiZO7aXG72lbXyrk0475D0ArSYvbz7arKwTYzwCcR4rtegxP5whM04fQeCGaOvz0v66uEvLoM+HNPpj69x5h8BcOREA1xEry9ZSXLdcJy2YtIBgQesfH19X2n9DbZHHYKO9D0LXDFzW+XTSftG3/qGuJvLDHpbkiteRj5Mvks=)
2025-07-12 13:17:58.161579 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM8AgQRFb5LmR2R3zojzKeyFSgcuWDW++tkr1x1QByyqqRafNE7ohQmnvsvQLxG5lCz7M0XU9c2fQ8zXr4mPvKk=)
2025-07-12 13:17:58.161589 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA6jvw7lVADzsBP3q0T7aQqiofT6ZtfzPMX6dQgm4fmQ)
2025-07-12 13:17:58.161599 | orchestrator |
2025-07-12 13:17:58.161608 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-12 13:17:58.161618 | orchestrator | Saturday 12 July 2025 13:17:57 +0000 (0:00:01.073) 0:00:11.983 *********
2025-07-12 13:17:58.161635 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAt77y41lodg/GkxIUYxViSPLWC24dDnLhHq2G5cQ6vGELpIsduu5wGk54s4w0nSPPoBnYvKhyq7ig4jmsYrRaQQMIxJelNowAKpkIjODliU3eAbYD7O44q7pJeQMc1+ce6vYkGA/dWptY/4tajke/TfX1au4LwW/cWE8seV9TbYt0fjjdqUXkKDtS8WYn82hvXCujHOaQ72kd1sdOOSulgMDpr2t8GnsysXDCbulrdv6B6UQkUp8siK46xYoaKoz8vsolBdY3dRi1dGEifyxWpMKg99M8THxZ6MAHoHCB3QZdx1sQceJ2F9lxslYjwaOSvu61+5BAV4U2tx20oxZOoQG5yxyrJEvEac9+fWOXlAUen0lTcnaTHieQbCpRhVH7yyJmVFFaJVRNQfVYu2HNkdwoPhyjpm3wAxabqp9DMr/TnSzuk5ZYIdZ1+aTTB1KcaXmQe9buA8gSHIY6d0koS71MWdpHQjCUO6EXq30bfWXrZsJbMfxt1q4L49Ex71E=)
2025-07-12 13:18:09.165258 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBLtmDRPOBEt3IZ8r6sFP/Zzjzu4uruVBkELB0vdbwn7h+s/LGmnGHIYXiGy3nybEXanIDH3SfKzJrwccvLe9fM=)
2025-07-12 13:18:09.165403 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDGwl7Homu3JEkhmqjgVXeLI4NopjxQtT4heDNysfJZt)
2025-07-12 13:18:09.165430 | orchestrator |
2025-07-12 13:18:09.165449 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-12 13:18:09.165470 | orchestrator | Saturday 12 July 2025 13:17:58 +0000 (0:00:01.080) 0:00:13.064 *********
2025-07-12 13:18:09.165490 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCjq16g1s5zvGCtY8dPi0g047e70PeWnX4Pq6EKnAanEhSFCBPIJAbN9uw7a3aWdeHVbA3jQNftMobLgsY21i8k1wu0QIc+Em5Cf+LuY5pbsw0up6XUbKEYE+mx6nG1JXNcNkvEsfXwGfTIBu1joO+zlc2omA0Ch8IDxmpRkErJA0sOwLKEfegn37fgHkB20W9a9qrOYS1zAgsWmv2vQ4AVQdU0T41jB9QulhPV00ZhsQ7jO0FY6kTuG3RZFdYKpieOhABaCDTC3WEQ/sRPuAxdqWwCVy8Xe2Bw0zS0DadOlNYrHnhPpnZKKsLe1mLxvcHj9MlE2oUqOr0dwSqexFJWavvzEzZhp9eYUCozpT0jVm90CfhwWmMp70j+/2RweHKcSgUUhROBOdq8nKX8x3pjXrEKXD5JFG9AhEBym4rKcRxPKocsAuB9+EviDNsvx9YxUyM5EEFBwEEqG3kXNT2SzNbWvdxQHAXQtN6ueZcpRGnEw/v9ase7EjwMXCc8kvM=)
2025-07-12 13:18:09.165511 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN94q+PEkZ2Uo8cDPSQCMEMKdscEFvaZDtSlOhWcxuaDpglU5qlHlgY1Ah5CRcVBExsVOY1qbaDGcyoJAhrsMFk=)
2025-07-12 13:18:09.165529 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPXCKJyCDyHrvcFUHlLh294u9jKQE60bSfH13f8yXrh2)
2025-07-12 13:18:09.165545 | orchestrator |
2025-07-12 13:18:09.165562 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2025-07-12 13:18:09.165581 | orchestrator | Saturday 12 July 2025 13:17:59 +0000 (0:00:01.112)
0:00:14.177 ********* 2025-07-12 13:18:09.165599 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-07-12 13:18:09.165618 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-07-12 13:18:09.165637 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-07-12 13:18:09.165654 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-07-12 13:18:09.165672 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-07-12 13:18:09.165690 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-07-12 13:18:09.165708 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-07-12 13:18:09.165726 | orchestrator | 2025-07-12 13:18:09.165781 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-07-12 13:18:09.165803 | orchestrator | Saturday 12 July 2025 13:18:04 +0000 (0:00:05.447) 0:00:19.624 ********* 2025-07-12 13:18:09.165824 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-07-12 13:18:09.165846 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-07-12 13:18:09.165865 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-07-12 13:18:09.165888 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-07-12 13:18:09.165916 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-07-12 13:18:09.165940 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-07-12 13:18:09.165966 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-07-12 13:18:09.165988 | orchestrator | 2025-07-12 13:18:09.166005 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 13:18:09.166096 | orchestrator | Saturday 12 July 2025 13:18:04 +0000 (0:00:00.177) 0:00:19.801 ********* 2025-07-12 13:18:09.166160 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHOGtVdYscqQyQ4x1t7h0pqjEhfr34eS3VMHm9LMNsqA) 2025-07-12 13:18:09.166295 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCWqdkOt6QSYWVACAGGIUwaviRnIgBkV5H9GDo0kPY54GYhZHC/w6RZjpsP/+KpFLfel8AqAO5gp0MPh+OP+yHXDMoMbNtcYelQNFqJczg5/ZZtorMCA5vLjxUf1gF8zetrLvFu1Z7Awb7yHCR+M2hjILhMajtWI/mOOIe2EvIkEyp4AhM42oh7F/ccYl+uSi1vkiJN6/OI1SgR+OVfnwluJS4rvXj7H8b5nCfo/ZQUKjx2VeHkgJF10t5ri2vG0AC35lDTyL0d+dYMDUykqwOgpJJuSaGN8BiPMBijEtbKwLkJVJ0w2o0l+eegN7gCT4hCxiLBHCY1KuWMqYs2Yj2v2ByI/klJcne1fT+QngnKR06VO6N9QTe4ubzKeFeynz6cthBzlz8Mp+9Cx/YNwONs4jGU9IQUY9ueHLzTIpXEGUnHkpIIjpovbDw0hG3W9cAUOHBe7ltymJJtIjSL53e8k/llVrtW7zceitBeV1Bf6yo9w9EloPS1ZjNYniNR720=) 2025-07-12 13:18:09.166324 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM90WSwB/0hjR/a5pHYG5MWaQrjYc00KAda5dPXKZyZNIk7DXTDtNT9p0v87afBokcaGky5F4Ot3AhD8lZg2hzo=) 2025-07-12 
13:18:09.166344 | orchestrator | 2025-07-12 13:18:09.166364 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 13:18:09.166382 | orchestrator | Saturday 12 July 2025 13:18:05 +0000 (0:00:01.096) 0:00:20.897 ********* 2025-07-12 13:18:09.166400 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILIixSVoPGqxnWHDZDFyCvix2sSBLjHfYa3gzXfFutgj) 2025-07-12 13:18:09.166421 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8Zmsl8ThHZH/ai+F/z7TsfzS9VRF6AhOvQ94K1wPgvCowB3WTYX3KOwZ4UEux0s6M//x/UYIAviyOAr5jtNARzlfcCa1qrEUnb7oFZttIobvk22T3L3Bq0KeglUNNVIOrvO8ndAZzvLZHATGP+96J94GTC6N5jYnna53QCybzrih+cgncjHNsg8dWqSynvNQe1qURskJeQAreW6pJc6HutTH+Ler7pJQgQbS2rWz+E8UVW7L/YFqwlHK7INmBdMLjxX0eXjKx8yb/EtCdrfTjjzLjPnnXzg7ZaI6mfqXfRkvAroAAAsnMknYbMDknR/rHWnoFPl4ZwKKyi1v3VF6AckWNtNzyoXwSC4KCZqMEjqVtvQiJHzR/kNAAgVZULZmlelOXS+FqVSWFt6M5zzUZvd6RlEqZ1COoKXNs4GME5lZqkCnHPQOzA3fxwjuRjf3TLNjBe6y6wsb93ruaberFMTiJ4UHUn8k5Wu6eiOu+04pCWsoPp0dxrkM2+5dqsyU=) 2025-07-12 13:18:09.166460 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDo4pHI3eVHKTxt6Q/sWJ1SXcJZG3ncsvBNhisEkH9/nwngzX5lc7q9iGRAyRKtyruwl4Gb9bTmHAhAN8hGKD3Y=) 2025-07-12 13:18:09.166472 | orchestrator | 2025-07-12 13:18:09.166483 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 13:18:09.166494 | orchestrator | Saturday 12 July 2025 13:18:07 +0000 (0:00:01.075) 0:00:21.973 ********* 2025-07-12 13:18:09.166505 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHVeQ4irA6d0I79kR4qR4j2o0JaC1t86bZkh8AUluxk3) 2025-07-12 13:18:09.166516 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC1sgV3f9Z41WkaJQra9j+KoQKJ8EkOaokDU56WZo/+mAXppcGL6/SxnS4Sr9WzD2THqJUjZmYvPp1gY8Jg1BwT/7zEnJl938mTj1pN3RZtsaXxr20O883yoaEteJNIvdrmTIIRmJmkcZHlFZNs0xVI9aspvJBEEbQ6DJa+0hzXc0fZI5MVJosyMWFAfcyUQCCk5lgM0j7Co6YgiH4UZNZgRvUPS5wVsUeZaC8zdG8WixwNyaGYNuFDX8WfMNSety2E2rPvohxLiGvgjyFXJSsaytDUZYceEq8bayhGaxVAmrN/7jESgGLbzI5mwzMN3e/GXK4NQlIu7qicxU4nI08KSqefEiJdGGrWu+asyGc97yxiIMpnBYprfNspkbMVXPP/xnsSn4Py54lZ5hpaHHmLpdkYmmcSA1+vpUITF0dUZHg2b1PKf6DrO7raKswYkNDGJYhy+UsCpbK9QHW8a+tjYJidwnJFrSz5JaPw0Br1dCS5r+LGbhjwYke8cKQYYWc=) 2025-07-12 13:18:09.166527 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPqTClFRoZ7A9HWRSZtNFnsguIom2EUNSH38AKWNSNhSB8/+KjKuxN6GgDHg4HUnL4NZUsHZOhpBLU4flbnWBb8=) 2025-07-12 13:18:09.166537 | orchestrator | 2025-07-12 13:18:09.166548 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 13:18:09.166558 | orchestrator | Saturday 12 July 2025 13:18:08 +0000 (0:00:01.016) 0:00:22.990 ********* 2025-07-12 13:18:09.166572 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHGhBVSpk+pAO+8M+8wKOy3nbyN4Qaj1k2Kao7aALpaC) 2025-07-12 13:18:09.166588 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEmqTXN+wZkD0Uw8z3WIFLehuuM+IKSWLmQvRizp+H2yZuY3XndDXv0QHjwsilXZyoEzkPbQNuc2XLA06QmhfHO9fMHvh9g2wdvvLMIg+EDOS0MZRAkERYly+Pz1UXHMXch8nCUOAmjGe77OSS3Ea+WQ4ywRM6GPCMb7mOu/KMFpURdhGnJW/20hckEKYfMoIsT0ytkFENO6LkncZz0zAIXVCXs83DqR/5mvnftqi9XP6PO4yOrhTHiDeUVRyWgaiJRBghOVXFnWXZIsuRjC/Q7hdSlXFMMb3+DvZ4hcW/bVUy/Jx2PcGrjDfCs7x9HV0FuKCh+/dRWwaJltplb0/CTQZCM28+sjyJVrL5jeCzVISY+zkVA10jTrnWzs5Y7zqE8WD6s1dyWiI9nDjb/TSBu3oTcJUvMxx1Q7SVnIP66A5mlEGEL3COVEXDQDIRqpNMBUSp3aMtAJRDN5hUADNPFL3dbnsYE8B7vSZ6JnsgUkyPlqUx8pCLV5weckzth/8=) 2025-07-12 13:18:09.166616 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDL3tYwh1ozbF+Jgw97A/5L0Mn5QzNDLYAkGI35PMg2SP5oVtREOb512shYQlGYc8NBGozAIFnAwZjK6D2ihYY4=) 2025-07-12 13:18:13.479601 | orchestrator | 2025-07-12 13:18:13.479707 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 13:18:13.479722 | orchestrator | Saturday 12 July 2025 13:18:09 +0000 (0:00:01.074) 0:00:24.064 ********* 2025-07-12 13:18:13.479735 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA6jvw7lVADzsBP3q0T7aQqiofT6ZtfzPMX6dQgm4fmQ) 2025-07-12 13:18:13.479751 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCz48dgd9Ei4nhQb4jihLB5y6R6RJ+KTv3yORP9a1SZpTWoVl0n0zcyHNK3z8wT0Jlbx8h16XiO8xxRfzj7KzKoAeTUmAk3HYNFy2Dgk4Bt1BBRJw2BmJZwHPkCP2Q3k8VHHQEjguM4F+C9ulTcYyJfMsjpe7AJ+I2ts9zEH7Rc331zKGZzsxdnXL2KGflkyVgnQxnmtyj0KNZU+u4zXoxuUC+gjoZcDoEuhMq3VqnLIK4ljbunOnAWO46VH+0TzHAQYg3sNNJvCcFADPAwhVN4MViqA8SvSvAei3n0fpE9i+mhAkJymkES0Xz6JeLVIaJU7FJEzOKOtY6SH0IhEBeegxgiZO7aXG72lbXyrk0475D0ArSYvbz7arKwTYzwCcR4rtegxP5whM04fQeCGaOvz0v66uEvLoM+HNPpj69x5h8BcOREA1xEry9ZSXLdcJy2YtIBgQesfH19X2n9DbZHHYKO9D0LXDFzW+XTSftG3/qGuJvLDHpbkiteRj5Mvks=) 2025-07-12 13:18:13.479766 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM8AgQRFb5LmR2R3zojzKeyFSgcuWDW++tkr1x1QByyqqRafNE7ohQmnvsvQLxG5lCz7M0XU9c2fQ8zXr4mPvKk=) 2025-07-12 13:18:13.479804 | orchestrator | 2025-07-12 13:18:13.479816 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 13:18:13.479826 | orchestrator | Saturday 12 July 2025 13:18:10 +0000 (0:00:01.107) 0:00:25.172 ********* 2025-07-12 13:18:13.479837 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBLtmDRPOBEt3IZ8r6sFP/Zzjzu4uruVBkELB0vdbwn7h+s/LGmnGHIYXiGy3nybEXanIDH3SfKzJrwccvLe9fM=) 2025-07-12 13:18:13.479848 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDGwl7Homu3JEkhmqjgVXeLI4NopjxQtT4heDNysfJZt) 2025-07-12 13:18:13.479877 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAt77y41lodg/GkxIUYxViSPLWC24dDnLhHq2G5cQ6vGELpIsduu5wGk54s4w0nSPPoBnYvKhyq7ig4jmsYrRaQQMIxJelNowAKpkIjODliU3eAbYD7O44q7pJeQMc1+ce6vYkGA/dWptY/4tajke/TfX1au4LwW/cWE8seV9TbYt0fjjdqUXkKDtS8WYn82hvXCujHOaQ72kd1sdOOSulgMDpr2t8GnsysXDCbulrdv6B6UQkUp8siK46xYoaKoz8vsolBdY3dRi1dGEifyxWpMKg99M8THxZ6MAHoHCB3QZdx1sQceJ2F9lxslYjwaOSvu61+5BAV4U2tx20oxZOoQG5yxyrJEvEac9+fWOXlAUen0lTcnaTHieQbCpRhVH7yyJmVFFaJVRNQfVYu2HNkdwoPhyjpm3wAxabqp9DMr/TnSzuk5ZYIdZ1+aTTB1KcaXmQe9buA8gSHIY6d0koS71MWdpHQjCUO6EXq30bfWXrZsJbMfxt1q4L49Ex71E=) 2025-07-12 13:18:13.479889 | orchestrator | 2025-07-12 13:18:13.479900 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 13:18:13.479910 | orchestrator | Saturday 12 July 2025 13:18:11 +0000 (0:00:01.144) 0:00:26.317 ********* 2025-07-12 13:18:13.479921 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPXCKJyCDyHrvcFUHlLh294u9jKQE60bSfH13f8yXrh2) 2025-07-12 13:18:13.479932 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCjq16g1s5zvGCtY8dPi0g047e70PeWnX4Pq6EKnAanEhSFCBPIJAbN9uw7a3aWdeHVbA3jQNftMobLgsY21i8k1wu0QIc+Em5Cf+LuY5pbsw0up6XUbKEYE+mx6nG1JXNcNkvEsfXwGfTIBu1joO+zlc2omA0Ch8IDxmpRkErJA0sOwLKEfegn37fgHkB20W9a9qrOYS1zAgsWmv2vQ4AVQdU0T41jB9QulhPV00ZhsQ7jO0FY6kTuG3RZFdYKpieOhABaCDTC3WEQ/sRPuAxdqWwCVy8Xe2Bw0zS0DadOlNYrHnhPpnZKKsLe1mLxvcHj9MlE2oUqOr0dwSqexFJWavvzEzZhp9eYUCozpT0jVm90CfhwWmMp70j+/2RweHKcSgUUhROBOdq8nKX8x3pjXrEKXD5JFG9AhEBym4rKcRxPKocsAuB9+EviDNsvx9YxUyM5EEFBwEEqG3kXNT2SzNbWvdxQHAXQtN6ueZcpRGnEw/v9ase7EjwMXCc8kvM=) 2025-07-12 13:18:13.479943 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN94q+PEkZ2Uo8cDPSQCMEMKdscEFvaZDtSlOhWcxuaDpglU5qlHlgY1Ah5CRcVBExsVOY1qbaDGcyoJAhrsMFk=) 2025-07-12 13:18:13.479954 | orchestrator | 2025-07-12 13:18:13.479965 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-07-12 13:18:13.479975 | orchestrator | Saturday 12 July 2025 13:18:12 +0000 (0:00:01.042) 0:00:27.359 ********* 2025-07-12 13:18:13.479986 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-07-12 13:18:13.479997 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-07-12 13:18:13.480007 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-07-12 13:18:13.480018 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-07-12 13:18:13.480029 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-07-12 13:18:13.480039 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-07-12 13:18:13.480050 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-07-12 13:18:13.480061 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:18:13.480072 | orchestrator | 2025-07-12 13:18:13.480098 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts 
entries] ************* 2025-07-12 13:18:13.480110 | orchestrator | Saturday 12 July 2025 13:18:12 +0000 (0:00:00.156) 0:00:27.516 ********* 2025-07-12 13:18:13.480156 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:18:13.480187 | orchestrator | 2025-07-12 13:18:13.480206 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-07-12 13:18:13.480218 | orchestrator | Saturday 12 July 2025 13:18:12 +0000 (0:00:00.074) 0:00:27.590 ********* 2025-07-12 13:18:13.480230 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:18:13.480242 | orchestrator | 2025-07-12 13:18:13.480253 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-07-12 13:18:13.480265 | orchestrator | Saturday 12 July 2025 13:18:12 +0000 (0:00:00.054) 0:00:27.644 ********* 2025-07-12 13:18:13.480277 | orchestrator | changed: [testbed-manager] 2025-07-12 13:18:13.480288 | orchestrator | 2025-07-12 13:18:13.480300 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:18:13.480312 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-12 13:18:13.480325 | orchestrator | 2025-07-12 13:18:13.480336 | orchestrator | 2025-07-12 13:18:13.480348 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:18:13.480360 | orchestrator | Saturday 12 July 2025 13:18:13 +0000 (0:00:00.494) 0:00:28.139 ********* 2025-07-12 13:18:13.480372 | orchestrator | =============================================================================== 2025-07-12 13:18:13.480384 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.12s 2025-07-12 13:18:13.480396 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.45s 2025-07-12 13:18:13.480409 | orchestrator | 
osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2025-07-12 13:18:13.480420 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2025-07-12 13:18:13.480432 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-07-12 13:18:13.480444 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-07-12 13:18:13.480456 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-07-12 13:18:13.480468 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-07-12 13:18:13.480480 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-07-12 13:18:13.480491 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-07-12 13:18:13.480502 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-07-12 13:18:13.480513 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-07-12 13:18:13.480523 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-07-12 13:18:13.480534 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-07-12 13:18:13.480544 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-07-12 13:18:13.480555 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-07-12 13:18:13.480565 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.49s 2025-07-12 13:18:13.480576 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2025-07-12 13:18:13.480587 | 
orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2025-07-12 13:18:13.480598 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2025-07-12 13:18:13.749417 | orchestrator | + osism apply squid 2025-07-12 13:18:25.818421 | orchestrator | 2025-07-12 13:18:25 | INFO  | Task f8626568-16fc-4f08-b510-c5b80e0ed3ad (squid) was prepared for execution. 2025-07-12 13:18:25.818537 | orchestrator | 2025-07-12 13:18:25 | INFO  | It takes a moment until task f8626568-16fc-4f08-b510-c5b80e0ed3ad (squid) has been started and output is visible here. 2025-07-12 13:20:20.532880 | orchestrator | 2025-07-12 13:20:20.533004 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-07-12 13:20:20.533051 | orchestrator | 2025-07-12 13:20:20.533063 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-07-12 13:20:20.533075 | orchestrator | Saturday 12 July 2025 13:18:29 +0000 (0:00:00.168) 0:00:00.168 ********* 2025-07-12 13:20:20.533086 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-07-12 13:20:20.533097 | orchestrator | 2025-07-12 13:20:20.533114 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-07-12 13:20:20.533134 | orchestrator | Saturday 12 July 2025 13:18:29 +0000 (0:00:00.096) 0:00:00.265 ********* 2025-07-12 13:20:20.533152 | orchestrator | ok: [testbed-manager] 2025-07-12 13:20:20.533230 | orchestrator | 2025-07-12 13:20:20.533251 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-07-12 13:20:20.533270 | orchestrator | Saturday 12 July 2025 13:18:31 +0000 (0:00:01.440) 0:00:01.705 ********* 2025-07-12 13:20:20.533282 | orchestrator | changed: [testbed-manager] => 
(item=/opt/squid/configuration) 2025-07-12 13:20:20.533292 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-07-12 13:20:20.533303 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-07-12 13:20:20.533314 | orchestrator | 2025-07-12 13:20:20.533325 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-07-12 13:20:20.533336 | orchestrator | Saturday 12 July 2025 13:18:32 +0000 (0:00:01.166) 0:00:02.872 ********* 2025-07-12 13:20:20.533347 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-07-12 13:20:20.533357 | orchestrator | 2025-07-12 13:20:20.533368 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-07-12 13:20:20.533379 | orchestrator | Saturday 12 July 2025 13:18:33 +0000 (0:00:01.068) 0:00:03.940 ********* 2025-07-12 13:20:20.533390 | orchestrator | ok: [testbed-manager] 2025-07-12 13:20:20.533400 | orchestrator | 2025-07-12 13:20:20.533411 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-07-12 13:20:20.533424 | orchestrator | Saturday 12 July 2025 13:18:33 +0000 (0:00:00.372) 0:00:04.313 ********* 2025-07-12 13:20:20.533436 | orchestrator | changed: [testbed-manager] 2025-07-12 13:20:20.533448 | orchestrator | 2025-07-12 13:20:20.533480 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-07-12 13:20:20.533493 | orchestrator | Saturday 12 July 2025 13:18:34 +0000 (0:00:00.940) 0:00:05.253 ********* 2025-07-12 13:20:20.533505 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-07-12 13:20:20.533518 | orchestrator | ok: [testbed-manager] 2025-07-12 13:20:20.533530 | orchestrator | 2025-07-12 13:20:20.533542 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-07-12 13:20:20.533554 | orchestrator | Saturday 12 July 2025 13:19:07 +0000 (0:00:32.447) 0:00:37.700 ********* 2025-07-12 13:20:20.533567 | orchestrator | changed: [testbed-manager] 2025-07-12 13:20:20.533579 | orchestrator | 2025-07-12 13:20:20.533590 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-07-12 13:20:20.533684 | orchestrator | Saturday 12 July 2025 13:19:19 +0000 (0:00:12.106) 0:00:49.807 ********* 2025-07-12 13:20:20.533706 | orchestrator | Pausing for 60 seconds 2025-07-12 13:20:20.533727 | orchestrator | changed: [testbed-manager] 2025-07-12 13:20:20.533746 | orchestrator | 2025-07-12 13:20:20.533766 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-07-12 13:20:20.533784 | orchestrator | Saturday 12 July 2025 13:20:19 +0000 (0:01:00.068) 0:01:49.875 ********* 2025-07-12 13:20:20.533801 | orchestrator | ok: [testbed-manager] 2025-07-12 13:20:20.533819 | orchestrator | 2025-07-12 13:20:20.533835 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-07-12 13:20:20.533853 | orchestrator | Saturday 12 July 2025 13:20:19 +0000 (0:00:00.064) 0:01:49.939 ********* 2025-07-12 13:20:20.533872 | orchestrator | changed: [testbed-manager] 2025-07-12 13:20:20.533890 | orchestrator | 2025-07-12 13:20:20.533922 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:20:20.533942 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:20:20.533962 | orchestrator | 2025-07-12 13:20:20.533982 | orchestrator | 2025-07-12 13:20:20.534000 | orchestrator | 
TASKS RECAP ******************************************************************** 2025-07-12 13:20:20.534114 | orchestrator | Saturday 12 July 2025 13:20:20 +0000 (0:00:00.651) 0:01:50.591 ********* 2025-07-12 13:20:20.534138 | orchestrator | =============================================================================== 2025-07-12 13:20:20.534155 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2025-07-12 13:20:20.534195 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.45s 2025-07-12 13:20:20.534207 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.11s 2025-07-12 13:20:20.534217 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.44s 2025-07-12 13:20:20.534228 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.17s 2025-07-12 13:20:20.534239 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.07s 2025-07-12 13:20:20.534250 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.94s 2025-07-12 13:20:20.534260 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.65s 2025-07-12 13:20:20.534271 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s 2025-07-12 13:20:20.534281 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s 2025-07-12 13:20:20.534293 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2025-07-12 13:20:20.830851 | orchestrator | + [[ 9.2.0 != \l\a\t\e\s\t ]] 2025-07-12 13:20:20.830940 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-07-12 13:20:20.836228 | orchestrator | ++ semver 9.2.0 9.0.0 
2025-07-12 13:20:20.906012 | orchestrator | + [[ 1 -lt 0 ]]
2025-07-12 13:20:20.906554 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2025-07-12 13:20:32.797167 | orchestrator | 2025-07-12 13:20:32 | INFO  | Task a8483c39-8e19-4768-8bb0-64c0f68265b4 (operator) was prepared for execution.
2025-07-12 13:20:32.797328 | orchestrator | 2025-07-12 13:20:32 | INFO  | It takes a moment until task a8483c39-8e19-4768-8bb0-64c0f68265b4 (operator) has been started and output is visible here.
2025-07-12 13:20:48.180887 | orchestrator |
2025-07-12 13:20:48.181005 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2025-07-12 13:20:48.181022 | orchestrator |
2025-07-12 13:20:48.181053 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-12 13:20:48.181065 | orchestrator | Saturday 12 July 2025 13:20:36 +0000 (0:00:00.150) 0:00:00.150 *********
2025-07-12 13:20:48.181076 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:20:48.181088 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:20:48.181099 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:20:48.181109 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:20:48.181120 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:20:48.181130 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:20:48.181141 | orchestrator |
2025-07-12 13:20:48.181152 | orchestrator | TASK [Do not require tty for all users] ****************************************
2025-07-12 13:20:48.181162 | orchestrator | Saturday 12 July 2025 13:20:39 +0000 (0:00:03.258) 0:00:03.409 *********
2025-07-12 13:20:48.181173 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:20:48.181218 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:20:48.181237 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:20:48.181249 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:20:48.181259 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:20:48.181270 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:20:48.181281 | orchestrator |
2025-07-12 13:20:48.181292 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-07-12 13:20:48.181327 | orchestrator |
2025-07-12 13:20:48.181339 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-07-12 13:20:48.181349 | orchestrator | Saturday 12 July 2025 13:20:40 +0000 (0:00:00.763) 0:00:04.172 *********
2025-07-12 13:20:48.181360 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:20:48.181370 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:20:48.181380 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:20:48.181390 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:20:48.181400 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:20:48.181411 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:20:48.181423 | orchestrator |
2025-07-12 13:20:48.181435 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-07-12 13:20:48.181447 | orchestrator | Saturday 12 July 2025 13:20:40 +0000 (0:00:00.183) 0:00:04.356 *********
2025-07-12 13:20:48.181459 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:20:48.181471 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:20:48.181483 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:20:48.181495 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:20:48.181506 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:20:48.181518 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:20:48.181530 | orchestrator |
2025-07-12 13:20:48.181541 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-07-12 13:20:48.181553 | orchestrator | Saturday 12 July 2025 13:20:41 +0000 (0:00:00.173) 0:00:04.529 *********
2025-07-12 13:20:48.181565 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:20:48.181578 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:20:48.181590 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:20:48.181602 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:20:48.181614 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:20:48.181625 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:20:48.181637 | orchestrator |
2025-07-12 13:20:48.181649 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-07-12 13:20:48.181661 | orchestrator | Saturday 12 July 2025 13:20:41 +0000 (0:00:00.628) 0:00:05.158 *********
2025-07-12 13:20:48.181673 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:20:48.181684 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:20:48.181695 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:20:48.181706 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:20:48.181718 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:20:48.181729 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:20:48.181741 | orchestrator |
2025-07-12 13:20:48.181753 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-07-12 13:20:48.181765 | orchestrator | Saturday 12 July 2025 13:20:42 +0000 (0:00:00.787) 0:00:05.946 *********
2025-07-12 13:20:48.181776 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-07-12 13:20:48.181787 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-07-12 13:20:48.181797 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-07-12 13:20:48.181808 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-07-12 13:20:48.181818 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-07-12 13:20:48.181829 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-07-12 13:20:48.181839 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-07-12 13:20:48.181849 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-07-12 13:20:48.181859 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-07-12 13:20:48.181870 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-07-12 13:20:48.181880 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-07-12 13:20:48.181890 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-07-12 13:20:48.181901 | orchestrator |
2025-07-12 13:20:48.181911 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-07-12 13:20:48.181921 | orchestrator | Saturday 12 July 2025 13:20:43 +0000 (0:00:01.143) 0:00:07.089 *********
2025-07-12 13:20:48.181933 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:20:48.181954 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:20:48.181974 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:20:48.181991 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:20:48.182008 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:20:48.182107 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:20:48.182128 | orchestrator |
2025-07-12 13:20:48.182140 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-07-12 13:20:48.182152 | orchestrator | Saturday 12 July 2025 13:20:44 +0000 (0:00:01.264) 0:00:08.354 *********
2025-07-12 13:20:48.182162 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-07-12 13:20:48.182173 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-07-12 13:20:48.182209 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-07-12 13:20:48.182220 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-07-12 13:20:48.182250 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-07-12 13:20:48.182261 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-07-12 13:20:48.182272 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-07-12 13:20:48.182282 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-07-12 13:20:48.182309 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-07-12 13:20:48.182320 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-07-12 13:20:48.182331 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-07-12 13:20:48.182341 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-07-12 13:20:48.182352 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-07-12 13:20:48.182362 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-07-12 13:20:48.182372 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-07-12 13:20:48.182382 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-07-12 13:20:48.182398 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-07-12 13:20:48.182409 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-07-12 13:20:48.182419 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-07-12 13:20:48.182429 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-07-12 13:20:48.182440 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-07-12 13:20:48.182450 | orchestrator |
2025-07-12 13:20:48.182461 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2025-07-12 13:20:48.182472 | orchestrator | Saturday 12 July 2025 13:20:46 +0000 (0:00:01.221) 0:00:09.575 *********
2025-07-12 13:20:48.182482 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:20:48.182493 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:20:48.182503 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:20:48.182514 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:20:48.182524 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:20:48.182534 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:20:48.182544 | orchestrator |
2025-07-12 13:20:48.182555 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-07-12 13:20:48.182566 | orchestrator | Saturday 12 July 2025 13:20:46 +0000 (0:00:00.159) 0:00:09.735 *********
2025-07-12 13:20:48.182576 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:20:48.182598 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:20:48.182609 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:20:48.182619 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:20:48.182629 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:20:48.182640 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:20:48.182650 | orchestrator |
2025-07-12 13:20:48.182672 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-07-12 13:20:48.182683 | orchestrator | Saturday 12 July 2025 13:20:46 +0000 (0:00:00.600) 0:00:10.336 *********
2025-07-12 13:20:48.182693 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:20:48.182704 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:20:48.182715 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:20:48.182725 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:20:48.182735 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:20:48.182746 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:20:48.182757 | orchestrator |
2025-07-12 13:20:48.182767 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-07-12 13:20:48.182778 | orchestrator | Saturday 12 July 2025 13:20:47 +0000 (0:00:00.176) 0:00:10.512 *********
2025-07-12 13:20:48.182788 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-12 13:20:48.182799 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:20:48.182809 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-07-12 13:20:48.182819 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:20:48.182830 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-07-12 13:20:48.182840 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-07-12 13:20:48.182851 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:20:48.182861 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:20:48.182871 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-07-12 13:20:48.182882 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:20:48.182892 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-07-12 13:20:48.182902 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:20:48.182913 | orchestrator |
2025-07-12 13:20:48.182923 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-07-12 13:20:48.182934 | orchestrator | Saturday 12 July 2025 13:20:47 +0000 (0:00:00.714) 0:00:11.227 *********
2025-07-12 13:20:48.182945 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:20:48.182955 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:20:48.182966 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:20:48.182976 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:20:48.182987 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:20:48.182997 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:20:48.183008 | orchestrator |
2025-07-12 13:20:48.183023 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-07-12 13:20:48.183042 | orchestrator | Saturday 12 July 2025 13:20:47 +0000 (0:00:00.136) 0:00:11.364 *********
2025-07-12 13:20:48.183059 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:20:48.183076 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:20:48.183094 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:20:48.183111 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:20:48.183129 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:20:48.183147 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:20:48.183165 | orchestrator |
2025-07-12 13:20:48.183208 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-07-12 13:20:48.183227 | orchestrator | Saturday 12 July 2025 13:20:48 +0000 (0:00:00.133) 0:00:11.497 *********
2025-07-12 13:20:48.183245 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:20:48.183263 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:20:48.183283 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:20:48.183300 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:20:48.183333 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:20:49.289029 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:20:49.289132 | orchestrator |
2025-07-12 13:20:49.289147 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-07-12 13:20:49.289161 | orchestrator | Saturday 12 July 2025 13:20:48 +0000 (0:00:00.156) 0:00:11.653 *********
2025-07-12 13:20:49.289172 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:20:49.289213 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:20:49.289225 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:20:49.289262 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:20:49.289274 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:20:49.289291 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:20:49.289308 | orchestrator |
2025-07-12 13:20:49.289320 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-07-12 13:20:49.289330 | orchestrator | Saturday 12 July 2025 13:20:48 +0000 (0:00:00.655) 0:00:12.309 *********
2025-07-12 13:20:49.289341 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:20:49.289353 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:20:49.289371 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:20:49.289383 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:20:49.289393 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:20:49.289404 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:20:49.289414 | orchestrator |
2025-07-12 13:20:49.289425 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:20:49.289437 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-12 13:20:49.289449 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-12 13:20:49.289459 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-12 13:20:49.289470 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-12 13:20:49.289481 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-12 13:20:49.289491 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-12 13:20:49.289502 | orchestrator |
2025-07-12 13:20:49.289512 | orchestrator |
2025-07-12 13:20:49.289523 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:20:49.289533 | orchestrator | Saturday 12 July 2025 13:20:49 +0000 (0:00:00.213) 0:00:12.522 *********
2025-07-12 13:20:49.289544 | orchestrator | ===============================================================================
2025-07-12 13:20:49.289555 | orchestrator | Gathering Facts --------------------------------------------------------- 3.26s
2025-07-12 13:20:49.289565 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.26s
2025-07-12 13:20:49.289576 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.22s
2025-07-12 13:20:49.289588 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.14s
2025-07-12 13:20:49.289598 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.79s
2025-07-12 13:20:49.289609 | orchestrator | Do not require tty for all users ---------------------------------------- 0.76s
2025-07-12 13:20:49.289620 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.71s
2025-07-12 13:20:49.289631 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.66s
2025-07-12 13:20:49.289641 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.63s
2025-07-12 13:20:49.289652 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.60s
2025-07-12 13:20:49.289662 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.21s
2025-07-12 13:20:49.289673 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.18s
2025-07-12 13:20:49.289683 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s
2025-07-12 13:20:49.289694 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.17s
2025-07-12 13:20:49.289713 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.16s
2025-07-12 13:20:49.289724 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s
2025-07-12 13:20:49.289735 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s
2025-07-12 13:20:49.289745 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.13s
2025-07-12 13:20:49.555423 | orchestrator | + osism apply --environment custom facts
2025-07-12 13:20:51.308761 | orchestrator | 2025-07-12 13:20:51 | INFO  | Trying to run play facts in environment custom
2025-07-12 13:21:01.444509 | orchestrator | 2025-07-12 13:21:01 | INFO  | Task 3161f62d-7e37-496c-ace7-7ec22f49ae02 (facts) was prepared for execution.
2025-07-12 13:21:01.444617 | orchestrator | 2025-07-12 13:21:01 | INFO  | It takes a moment until task 3161f62d-7e37-496c-ace7-7ec22f49ae02 (facts) has been started and output is visible here.
2025-07-12 13:21:43.407158 | orchestrator |
2025-07-12 13:21:43.407310 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-07-12 13:21:43.407329 | orchestrator |
2025-07-12 13:21:43.407341 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-07-12 13:21:43.407353 | orchestrator | Saturday 12 July 2025 13:21:05 +0000 (0:00:00.089) 0:00:00.089 *********
2025-07-12 13:21:43.407364 | orchestrator | ok: [testbed-manager]
2025-07-12 13:21:43.407377 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:21:43.407388 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:21:43.407439 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:21:43.407452 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:21:43.407463 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:21:43.407473 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:21:43.407485 | orchestrator |
2025-07-12 13:21:43.407496 | orchestrator | TASK [Copy fact file] **********************************************************
2025-07-12 13:21:43.407507 | orchestrator | Saturday 12 July 2025 13:21:06 +0000 (0:00:01.400) 0:00:01.489 *********
2025-07-12 13:21:43.407518 | orchestrator | ok: [testbed-manager]
2025-07-12 13:21:43.407530 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:21:43.407542 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:21:43.407558 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:21:43.407569 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:21:43.407579 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:21:43.407590 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:21:43.407600 | orchestrator |
2025-07-12 13:21:43.407611 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-07-12 13:21:43.407622 | orchestrator |
2025-07-12 13:21:43.407633 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-07-12 13:21:43.407644 | orchestrator | Saturday 12 July 2025 13:21:08 +0000 (0:00:01.230) 0:00:02.720 *********
2025-07-12 13:21:43.407655 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:21:43.407666 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:21:43.407677 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:21:43.407687 | orchestrator |
2025-07-12 13:21:43.407700 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-07-12 13:21:43.407713 | orchestrator | Saturday 12 July 2025 13:21:08 +0000 (0:00:00.104) 0:00:02.825 *********
2025-07-12 13:21:43.407726 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:21:43.407738 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:21:43.407749 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:21:43.407761 | orchestrator |
2025-07-12 13:21:43.407774 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-07-12 13:21:43.407786 | orchestrator | Saturday 12 July 2025 13:21:08 +0000 (0:00:00.194) 0:00:03.052 *********
2025-07-12 13:21:43.407798 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:21:43.407810 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:21:43.407822 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:21:43.407834 | orchestrator |
2025-07-12 13:21:43.407846 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-07-12 13:21:43.407880 | orchestrator | Saturday 12 July 2025 13:21:08 +0000 (0:00:00.144) 0:00:03.247 *********
2025-07-12 13:21:43.407894 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:21:43.407907 | orchestrator |
2025-07-12 13:21:43.407918 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-07-12 13:21:43.407929 | orchestrator | Saturday 12 July 2025 13:21:08 +0000 (0:00:00.144) 0:00:03.392 *********
2025-07-12 13:21:43.407940 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:21:43.407950 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:21:43.407961 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:21:43.407972 | orchestrator |
2025-07-12 13:21:43.407983 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-07-12 13:21:43.407993 | orchestrator | Saturday 12 July 2025 13:21:09 +0000 (0:00:00.435) 0:00:03.827 *********
2025-07-12 13:21:43.408004 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:21:43.408015 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:21:43.408026 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:21:43.408036 | orchestrator |
2025-07-12 13:21:43.408047 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-07-12 13:21:43.408058 | orchestrator | Saturday 12 July 2025 13:21:09 +0000 (0:00:00.114) 0:00:03.941 *********
2025-07-12 13:21:43.408069 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:21:43.408079 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:21:43.408090 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:21:43.408100 | orchestrator |
2025-07-12 13:21:43.408111 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-07-12 13:21:43.408122 | orchestrator | Saturday 12 July 2025 13:21:10 +0000 (0:00:01.221) 0:00:05.163 *********
2025-07-12 13:21:43.408132 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:21:43.408144 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:21:43.408154 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:21:43.408165 | orchestrator |
2025-07-12 13:21:43.408176 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-07-12 13:21:43.408187 | orchestrator | Saturday 12 July 2025 13:21:10 +0000 (0:00:00.479) 0:00:05.642 *********
2025-07-12 13:21:43.408197 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:21:43.408231 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:21:43.408242 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:21:43.408253 | orchestrator |
2025-07-12 13:21:43.408263 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-07-12 13:21:43.408274 | orchestrator | Saturday 12 July 2025 13:21:12 +0000 (0:00:01.143) 0:00:06.786 *********
2025-07-12 13:21:43.408284 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:21:43.408295 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:21:43.408306 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:21:43.408316 | orchestrator |
2025-07-12 13:21:43.408327 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-07-12 13:21:43.408337 | orchestrator | Saturday 12 July 2025 13:21:26 +0000 (0:00:14.422) 0:00:21.208 *********
2025-07-12 13:21:43.408348 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:21:43.408359 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:21:43.408370 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:21:43.408380 | orchestrator |
2025-07-12 13:21:43.408391 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-07-12 13:21:43.408419 | orchestrator | Saturday 12 July 2025 13:21:26 +0000 (0:00:00.112) 0:00:21.321 *********
2025-07-12 13:21:43.408431 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:21:43.408441 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:21:43.408452 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:21:43.408463 | orchestrator |
2025-07-12 13:21:43.408474 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-07-12 13:21:43.408492 | orchestrator | Saturday 12 July 2025 13:21:34 +0000 (0:00:07.504) 0:00:28.826 *********
2025-07-12 13:21:43.408503 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:21:43.408513 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:21:43.408524 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:21:43.408535 | orchestrator |
2025-07-12 13:21:43.408545 | orchestrator | TASK [Copy fact files] *********************************************************
2025-07-12 13:21:43.408556 | orchestrator | Saturday 12 July 2025 13:21:34 +0000 (0:00:00.434) 0:00:29.261 *********
2025-07-12 13:21:43.408567 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-07-12 13:21:43.408578 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-07-12 13:21:43.408593 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-07-12 13:21:43.408605 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-07-12 13:21:43.408615 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-07-12 13:21:43.408626 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-07-12 13:21:43.408637 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-07-12 13:21:43.408647 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-07-12 13:21:43.408658 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-07-12 13:21:43.408669 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-07-12 13:21:43.408680 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-07-12 13:21:43.408690 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-07-12 13:21:43.408701 | orchestrator |
2025-07-12 13:21:43.408711 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-07-12 13:21:43.408722 | orchestrator | Saturday 12 July 2025 13:21:38 +0000 (0:00:03.478) 0:00:32.739 *********
2025-07-12 13:21:43.408733 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:21:43.408743 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:21:43.408754 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:21:43.408765 | orchestrator |
2025-07-12 13:21:43.408775 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-07-12 13:21:43.408786 | orchestrator |
2025-07-12 13:21:43.408797 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-07-12 13:21:43.408807 | orchestrator | Saturday 12 July 2025 13:21:39 +0000 (0:00:01.373) 0:00:34.113 *********
2025-07-12 13:21:43.408884 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:21:43.408898 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:21:43.408908 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:21:43.408919 | orchestrator | ok: [testbed-manager]
2025-07-12 13:21:43.408930 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:21:43.408940 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:21:43.408950 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:21:43.408961 | orchestrator |
2025-07-12 13:21:43.408971 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:21:43.408983 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:21:43.408994 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:21:43.409006 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:21:43.409017 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:21:43.409028 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 13:21:43.409039 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 13:21:43.409057 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 13:21:43.409068 | orchestrator |
2025-07-12 13:21:43.409079 | orchestrator |
2025-07-12 13:21:43.409089 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:21:43.409100 | orchestrator | Saturday 12 July 2025 13:21:43 +0000 (0:00:03.931) 0:00:38.044 *********
2025-07-12 13:21:43.409111 | orchestrator | ===============================================================================
2025-07-12 13:21:43.409121 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.42s
2025-07-12 13:21:43.409132 | orchestrator | Install required packages (Debian) -------------------------------------- 7.50s
2025-07-12 13:21:43.409142 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.93s
2025-07-12 13:21:43.409153 | orchestrator | Copy fact files --------------------------------------------------------- 3.48s
2025-07-12 13:21:43.409163 | orchestrator | Create custom facts directory ------------------------------------------- 1.40s
2025-07-12 13:21:43.409174 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.37s
2025-07-12 13:21:43.409192 | orchestrator | Copy fact file ---------------------------------------------------------- 1.23s
2025-07-12 13:21:43.622132 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.22s
2025-07-12 13:21:43.622289 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.14s
2025-07-12 13:21:43.622304 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.48s
2025-07-12 13:21:43.622315 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.44s
2025-07-12 13:21:43.622326 | orchestrator | Create custom facts directory ------------------------------------------- 0.44s
2025-07-12 13:21:43.622336 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.23s
2025-07-12 13:21:43.622347 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.20s
2025-07-12 13:21:43.622358 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s
2025-07-12 13:21:43.622369 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s
2025-07-12 13:21:43.622380 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s
2025-07-12 13:21:43.622390 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s
2025-07-12 13:21:43.907353 | orchestrator | + osism apply bootstrap
2025-07-12 13:21:55.804177 | orchestrator | 2025-07-12 13:21:55 | INFO  | Task e7bd3455-fe64-420b-85eb-78ff0d88cc0b (bootstrap) was prepared for execution.
2025-07-12 13:21:55.804358 | orchestrator | 2025-07-12 13:21:55 | INFO  | It takes a moment until task e7bd3455-fe64-420b-85eb-78ff0d88cc0b (bootstrap) has been started and output is visible here.
2025-07-12 13:22:12.515050 | orchestrator |
2025-07-12 13:22:12.515168 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-07-12 13:22:12.515185 | orchestrator |
2025-07-12 13:22:12.515276 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-07-12 13:22:12.515292 | orchestrator | Saturday 12 July 2025 13:21:59 +0000 (0:00:00.168) 0:00:00.168 *********
2025-07-12 13:22:12.515304 | orchestrator | ok: [testbed-manager]
2025-07-12 13:22:12.515316 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:22:12.515327 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:22:12.515338 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:22:12.515348 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:22:12.515359 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:22:12.515370 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:22:12.515380 | orchestrator |
2025-07-12 13:22:12.515391 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-07-12 13:22:12.515427 | orchestrator |
2025-07-12 13:22:12.515438 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-07-12 13:22:12.515450 | orchestrator | Saturday 12 July 2025 13:22:00 +0000 (0:00:00.283) 0:00:00.452 *********
2025-07-12 13:22:12.515461 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:22:12.515472 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:22:12.515482 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:22:12.515493 | orchestrator | ok: [testbed-manager]
2025-07-12 13:22:12.515503 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:22:12.515514 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:22:12.515524 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:22:12.515534 | orchestrator |
2025-07-12 13:22:12.515545 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
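(The "Group hosts based on state bootstrap" play above sorts the inventory into dynamic groups by each host's bootstrap state, in the style of Ansible's `group_by`. A hedged Python sketch of that bucketing idea — the `bootstrap_true`/`bootstrap_false` group names are illustrative, not the exact OSISM group names:)

```python
from collections import defaultdict

def group_hosts_by_state(host_states, state_name="bootstrap"):
    """Bucket hosts into dynamic groups keyed by a boolean state flag.

    host_states maps hostname -> bool (already bootstrapped or not).
    Group names follow an illustrative "<state>_true"/"<state>_false" shape.
    """
    groups = defaultdict(list)
    for host, done in sorted(host_states.items()):
        groups[f"{state_name}_{str(done).lower()}"].append(host)
    return dict(groups)
```

Later plays can then target only the group that still needs bootstrapping, which is why the grouping runs before any bootstrap role.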
2025-07-12 13:22:12.515556 | orchestrator | 2025-07-12 13:22:12.515566 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-07-12 13:22:12.515577 | orchestrator | Saturday 12 July 2025 13:22:03 +0000 (0:00:03.646) 0:00:04.098 ********* 2025-07-12 13:22:12.515591 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-07-12 13:22:12.515603 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-07-12 13:22:12.515615 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-07-12 13:22:12.515627 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-07-12 13:22:12.515639 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-07-12 13:22:12.515650 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-07-12 13:22:12.515662 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-07-12 13:22:12.515675 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-07-12 13:22:12.515687 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-07-12 13:22:12.515699 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-07-12 13:22:12.515711 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-07-12 13:22:12.515724 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-07-12 13:22:12.515736 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-07-12 13:22:12.515747 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-07-12 13:22:12.515759 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-07-12 13:22:12.515771 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-07-12 13:22:12.515783 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-07-12 13:22:12.515795 | orchestrator | skipping: 
[testbed-manager] => (item=testbed-node-4)  2025-07-12 13:22:12.515807 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-07-12 13:22:12.515818 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-07-12 13:22:12.515830 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-07-12 13:22:12.515842 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-07-12 13:22:12.515854 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-07-12 13:22:12.515866 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:22:12.515878 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-07-12 13:22:12.515890 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-07-12 13:22:12.515902 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-07-12 13:22:12.515914 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-07-12 13:22:12.515926 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:22:12.515937 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-07-12 13:22:12.515948 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-07-12 13:22:12.515959 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-07-12 13:22:12.515969 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-07-12 13:22:12.515988 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:22:12.515998 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-07-12 13:22:12.516009 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-07-12 13:22:12.516019 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-07-12 13:22:12.516030 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-07-12 13:22:12.516046 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-3)  2025-07-12 13:22:12.516057 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-07-12 13:22:12.516068 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-07-12 13:22:12.516078 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-07-12 13:22:12.516088 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 13:22:12.516099 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-07-12 13:22:12.516109 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-07-12 13:22:12.516120 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:22:12.516148 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-07-12 13:22:12.516160 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 13:22:12.516170 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:22:12.516181 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-07-12 13:22:12.516191 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-07-12 13:22:12.516201 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-07-12 13:22:12.516237 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-07-12 13:22:12.516250 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:22:12.516261 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-07-12 13:22:12.516271 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:22:12.516282 | orchestrator | 2025-07-12 13:22:12.516292 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-07-12 13:22:12.516303 | orchestrator | 2025-07-12 13:22:12.516313 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-07-12 13:22:12.516324 | orchestrator | Saturday 12 July 2025 13:22:04 +0000 
(0:00:00.436) 0:00:04.535 ********* 2025-07-12 13:22:12.516335 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:22:12.516345 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:22:12.516355 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:22:12.516366 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:22:12.516376 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:22:12.516387 | orchestrator | ok: [testbed-manager] 2025-07-12 13:22:12.516397 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:22:12.516408 | orchestrator | 2025-07-12 13:22:12.516419 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-07-12 13:22:12.516429 | orchestrator | Saturday 12 July 2025 13:22:06 +0000 (0:00:02.203) 0:00:06.738 ********* 2025-07-12 13:22:12.516440 | orchestrator | ok: [testbed-manager] 2025-07-12 13:22:12.516451 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:22:12.516461 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:22:12.516472 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:22:12.516482 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:22:12.516493 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:22:12.516503 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:22:12.516514 | orchestrator | 2025-07-12 13:22:12.516524 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-07-12 13:22:12.516535 | orchestrator | Saturday 12 July 2025 13:22:07 +0000 (0:00:01.217) 0:00:07.956 ********* 2025-07-12 13:22:12.516546 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:22:12.516559 | orchestrator | 2025-07-12 13:22:12.516570 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-07-12 13:22:12.516589 | 
orchestrator | Saturday 12 July 2025 13:22:07 +0000 (0:00:00.282) 0:00:08.238 ********* 2025-07-12 13:22:12.516600 | orchestrator | changed: [testbed-manager] 2025-07-12 13:22:12.516611 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:22:12.516622 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:22:12.516633 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:22:12.516643 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:22:12.516653 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:22:12.516664 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:22:12.516674 | orchestrator | 2025-07-12 13:22:12.516685 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-07-12 13:22:12.516695 | orchestrator | Saturday 12 July 2025 13:22:10 +0000 (0:00:02.083) 0:00:10.321 ********* 2025-07-12 13:22:12.516706 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:22:12.516718 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:22:12.516730 | orchestrator | 2025-07-12 13:22:12.516741 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-07-12 13:22:12.516752 | orchestrator | Saturday 12 July 2025 13:22:10 +0000 (0:00:00.266) 0:00:10.588 ********* 2025-07-12 13:22:12.516762 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:22:12.516773 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:22:12.516783 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:22:12.516793 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:22:12.516804 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:22:12.516814 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:22:12.516825 | orchestrator | 2025-07-12 13:22:12.516835 | orchestrator | TASK 
[osism.commons.proxy : Set system wide settings in environment file] ****** 2025-07-12 13:22:12.516846 | orchestrator | Saturday 12 July 2025 13:22:11 +0000 (0:00:01.045) 0:00:11.633 ********* 2025-07-12 13:22:12.516857 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:22:12.516867 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:22:12.516877 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:22:12.516888 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:22:12.516898 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:22:12.516909 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:22:12.516919 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:22:12.516929 | orchestrator | 2025-07-12 13:22:12.516940 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-07-12 13:22:12.516951 | orchestrator | Saturday 12 July 2025 13:22:11 +0000 (0:00:00.566) 0:00:12.199 ********* 2025-07-12 13:22:12.516961 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:22:12.516972 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:22:12.516982 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:22:12.516992 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:22:12.517003 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:22:12.517013 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:22:12.517024 | orchestrator | ok: [testbed-manager] 2025-07-12 13:22:12.517035 | orchestrator | 2025-07-12 13:22:12.517045 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-07-12 13:22:12.517057 | orchestrator | Saturday 12 July 2025 13:22:12 +0000 (0:00:00.451) 0:00:12.651 ********* 2025-07-12 13:22:12.517067 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:22:12.517078 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:22:12.517096 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:22:24.542315 | 
orchestrator | skipping: [testbed-node-2] 2025-07-12 13:22:24.542431 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:22:24.542446 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:22:24.542457 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:22:24.542469 | orchestrator | 2025-07-12 13:22:24.542481 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-07-12 13:22:24.542519 | orchestrator | Saturday 12 July 2025 13:22:12 +0000 (0:00:00.230) 0:00:12.881 ********* 2025-07-12 13:22:24.542534 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:22:24.542562 | orchestrator | 2025-07-12 13:22:24.542573 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-07-12 13:22:24.542585 | orchestrator | Saturday 12 July 2025 13:22:12 +0000 (0:00:00.296) 0:00:13.178 ********* 2025-07-12 13:22:24.542595 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:22:24.542606 | orchestrator | 2025-07-12 13:22:24.542617 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-07-12 13:22:24.542628 | orchestrator | Saturday 12 July 2025 13:22:13 +0000 (0:00:00.300) 0:00:13.479 ********* 2025-07-12 13:22:24.542639 | orchestrator | ok: [testbed-manager] 2025-07-12 13:22:24.542651 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:22:24.542661 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:22:24.542672 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:22:24.542682 | orchestrator | ok: 
[testbed-node-3] 2025-07-12 13:22:24.542692 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:22:24.542702 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:22:24.542713 | orchestrator | 2025-07-12 13:22:24.542724 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-07-12 13:22:24.542736 | orchestrator | Saturday 12 July 2025 13:22:14 +0000 (0:00:01.404) 0:00:14.884 ********* 2025-07-12 13:22:24.542746 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:22:24.542757 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:22:24.542767 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:22:24.542778 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:22:24.542788 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:22:24.542798 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:22:24.542809 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:22:24.542819 | orchestrator | 2025-07-12 13:22:24.542830 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-07-12 13:22:24.542840 | orchestrator | Saturday 12 July 2025 13:22:14 +0000 (0:00:00.222) 0:00:15.106 ********* 2025-07-12 13:22:24.542851 | orchestrator | ok: [testbed-manager] 2025-07-12 13:22:24.542862 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:22:24.542872 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:22:24.542883 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:22:24.542909 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:22:24.542931 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:22:24.542942 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:22:24.542952 | orchestrator | 2025-07-12 13:22:24.542963 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-07-12 13:22:24.542973 | orchestrator | Saturday 12 July 2025 13:22:15 +0000 (0:00:00.525) 0:00:15.631 ********* 2025-07-12 13:22:24.543026 | 
orchestrator | skipping: [testbed-manager] 2025-07-12 13:22:24.543038 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:22:24.543049 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:22:24.543060 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:22:24.543070 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:22:24.543081 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:22:24.543091 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:22:24.543101 | orchestrator | 2025-07-12 13:22:24.543112 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-07-12 13:22:24.543123 | orchestrator | Saturday 12 July 2025 13:22:15 +0000 (0:00:00.249) 0:00:15.881 ********* 2025-07-12 13:22:24.543134 | orchestrator | ok: [testbed-manager] 2025-07-12 13:22:24.543153 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:22:24.543164 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:22:24.543174 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:22:24.543185 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:22:24.543195 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:22:24.543205 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:22:24.543216 | orchestrator | 2025-07-12 13:22:24.543249 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-07-12 13:22:24.543260 | orchestrator | Saturday 12 July 2025 13:22:16 +0000 (0:00:00.583) 0:00:16.464 ********* 2025-07-12 13:22:24.543270 | orchestrator | ok: [testbed-manager] 2025-07-12 13:22:24.543281 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:22:24.543292 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:22:24.543302 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:22:24.543313 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:22:24.543323 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:22:24.543333 | 
orchestrator | changed: [testbed-node-5] 2025-07-12 13:22:24.543344 | orchestrator | 2025-07-12 13:22:24.543355 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-07-12 13:22:24.543371 | orchestrator | Saturday 12 July 2025 13:22:17 +0000 (0:00:01.182) 0:00:17.647 ********* 2025-07-12 13:22:24.543382 | orchestrator | ok: [testbed-manager] 2025-07-12 13:22:24.543393 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:22:24.543403 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:22:24.543414 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:22:24.543425 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:22:24.543435 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:22:24.543445 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:22:24.543456 | orchestrator | 2025-07-12 13:22:24.543467 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-07-12 13:22:24.543478 | orchestrator | Saturday 12 July 2025 13:22:18 +0000 (0:00:01.125) 0:00:18.772 ********* 2025-07-12 13:22:24.543508 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:22:24.543520 | orchestrator | 2025-07-12 13:22:24.543531 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-07-12 13:22:24.543541 | orchestrator | Saturday 12 July 2025 13:22:18 +0000 (0:00:00.366) 0:00:19.138 ********* 2025-07-12 13:22:24.543552 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:22:24.543562 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:22:24.543573 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:22:24.543583 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:22:24.543594 | orchestrator | changed: [testbed-node-5] 
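(The resolvconf tasks above check `/etc/resolv.conf` and point it at systemd-resolved's stub file, reporting `ok` where the link already exists — the manager — and `changed` where it had to be created — the nodes. An idempotent sketch of that link step, with paths parameterized so it can run against a scratch directory rather than the real `/etc`:)

```python
import os

def link_resolv_conf(resolv_conf, stub_target):
    """Ensure resolv_conf is a symlink to stub_target, in the spirit of the role's link task.

    Returns "ok" when the link is already correct and "changed" when it
    had to be (re)created. The real role archives the old file first;
    this sketch simply replaces it.
    """
    if os.path.islink(resolv_conf) and os.readlink(resolv_conf) == stub_target:
        return "ok"
    if os.path.lexists(resolv_conf):
        os.remove(resolv_conf)  # replace a regular file or stale link
    os.symlink(stub_target, resolv_conf)
    return "changed"
```

Running it twice demonstrates the idempotency the log shows: the first pass reports `changed`, the second `ok`.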
2025-07-12 13:22:24.543604 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:22:24.543615 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:22:24.543625 | orchestrator | 2025-07-12 13:22:24.543636 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-07-12 13:22:24.543646 | orchestrator | Saturday 12 July 2025 13:22:20 +0000 (0:00:01.253) 0:00:20.392 ********* 2025-07-12 13:22:24.543657 | orchestrator | ok: [testbed-manager] 2025-07-12 13:22:24.543667 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:22:24.543678 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:22:24.543688 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:22:24.543699 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:22:24.543709 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:22:24.543720 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:22:24.543730 | orchestrator | 2025-07-12 13:22:24.543741 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-07-12 13:22:24.543751 | orchestrator | Saturday 12 July 2025 13:22:20 +0000 (0:00:00.237) 0:00:20.630 ********* 2025-07-12 13:22:24.543762 | orchestrator | ok: [testbed-manager] 2025-07-12 13:22:24.543773 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:22:24.543789 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:22:24.543800 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:22:24.543810 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:22:24.543821 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:22:24.543831 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:22:24.543842 | orchestrator | 2025-07-12 13:22:24.543852 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-07-12 13:22:24.543863 | orchestrator | Saturday 12 July 2025 13:22:20 +0000 (0:00:00.243) 0:00:20.874 ********* 2025-07-12 13:22:24.543874 | orchestrator | ok: [testbed-manager] 2025-07-12 
13:22:24.543884 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:22:24.543895 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:22:24.543905 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:22:24.543915 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:22:24.543926 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:22:24.543936 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:22:24.543947 | orchestrator | 2025-07-12 13:22:24.543957 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-07-12 13:22:24.543968 | orchestrator | Saturday 12 July 2025 13:22:20 +0000 (0:00:00.220) 0:00:21.094 ********* 2025-07-12 13:22:24.543979 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:22:24.543992 | orchestrator | 2025-07-12 13:22:24.544016 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-07-12 13:22:24.544028 | orchestrator | Saturday 12 July 2025 13:22:21 +0000 (0:00:00.300) 0:00:21.395 ********* 2025-07-12 13:22:24.544038 | orchestrator | ok: [testbed-manager] 2025-07-12 13:22:24.544049 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:22:24.544059 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:22:24.544069 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:22:24.544080 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:22:24.544090 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:22:24.544100 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:22:24.544111 | orchestrator | 2025-07-12 13:22:24.544121 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-07-12 13:22:24.544132 | orchestrator | Saturday 12 July 2025 13:22:21 +0000 (0:00:00.532) 0:00:21.927 ********* 2025-07-12 13:22:24.544142 | 
orchestrator | skipping: [testbed-manager] 2025-07-12 13:22:24.544153 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:22:24.544164 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:22:24.544174 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:22:24.544185 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:22:24.544195 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:22:24.544205 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:22:24.544216 | orchestrator | 2025-07-12 13:22:24.544244 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-07-12 13:22:24.544255 | orchestrator | Saturday 12 July 2025 13:22:21 +0000 (0:00:00.218) 0:00:22.145 ********* 2025-07-12 13:22:24.544266 | orchestrator | ok: [testbed-manager] 2025-07-12 13:22:24.544276 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:22:24.544287 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:22:24.544297 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:22:24.544308 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:22:24.544318 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:22:24.544329 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:22:24.544339 | orchestrator | 2025-07-12 13:22:24.544350 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-07-12 13:22:24.544366 | orchestrator | Saturday 12 July 2025 13:22:22 +0000 (0:00:01.015) 0:00:23.161 ********* 2025-07-12 13:22:24.544377 | orchestrator | ok: [testbed-manager] 2025-07-12 13:22:24.544388 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:22:24.544398 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:22:24.544409 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:22:24.544419 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:22:24.544436 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:22:24.544447 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:22:24.544457 | 
orchestrator | 2025-07-12 13:22:24.544468 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-07-12 13:22:24.544479 | orchestrator | Saturday 12 July 2025 13:22:23 +0000 (0:00:00.594) 0:00:23.756 ********* 2025-07-12 13:22:24.544490 | orchestrator | ok: [testbed-manager] 2025-07-12 13:22:24.544500 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:22:24.544511 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:22:24.544521 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:22:24.544538 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:23:01.711888 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:23:01.712009 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:23:01.712024 | orchestrator | 2025-07-12 13:23:01.712037 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-07-12 13:23:01.712050 | orchestrator | Saturday 12 July 2025 13:22:24 +0000 (0:00:01.046) 0:00:24.802 ********* 2025-07-12 13:23:01.712061 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:23:01.712073 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:23:01.712084 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:23:01.712095 | orchestrator | changed: [testbed-manager] 2025-07-12 13:23:01.712106 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:23:01.712116 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:23:01.712127 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:23:01.712138 | orchestrator | 2025-07-12 13:23:01.712149 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-07-12 13:23:01.712161 | orchestrator | Saturday 12 July 2025 13:22:38 +0000 (0:00:14.253) 0:00:39.055 ********* 2025-07-12 13:23:01.712171 | orchestrator | ok: [testbed-manager] 2025-07-12 13:23:01.712182 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:23:01.712193 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:23:01.712203 
| orchestrator | ok: [testbed-node-2]
2025-07-12 13:23:01.712214 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:23:01.712224 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:23:01.712295 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:23:01.712309 | orchestrator |
2025-07-12 13:23:01.712320 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2025-07-12 13:23:01.712330 | orchestrator | Saturday 12 July 2025 13:22:39 +0000 (0:00:00.246) 0:00:39.302 *********
2025-07-12 13:23:01.712341 | orchestrator | ok: [testbed-manager]
2025-07-12 13:23:01.712351 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:23:01.712362 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:23:01.712373 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:23:01.712383 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:23:01.712394 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:23:01.712404 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:23:01.712415 | orchestrator |
2025-07-12 13:23:01.712427 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2025-07-12 13:23:01.712440 | orchestrator | Saturday 12 July 2025 13:22:39 +0000 (0:00:00.233) 0:00:39.535 *********
2025-07-12 13:23:01.712452 | orchestrator | ok: [testbed-manager]
2025-07-12 13:23:01.712464 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:23:01.712476 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:23:01.712488 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:23:01.712500 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:23:01.712512 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:23:01.712525 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:23:01.712536 | orchestrator |
2025-07-12 13:23:01.712549 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2025-07-12 13:23:01.712561 | orchestrator | Saturday 12 July 2025 13:22:39 +0000 (0:00:00.241) 0:00:39.777 *********
2025-07-12 13:23:01.712576 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:23:01.712617 | orchestrator |
2025-07-12 13:23:01.712631 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2025-07-12 13:23:01.712644 | orchestrator | Saturday 12 July 2025 13:22:39 +0000 (0:00:00.285) 0:00:40.063 *********
2025-07-12 13:23:01.712655 | orchestrator | ok: [testbed-manager]
2025-07-12 13:23:01.712668 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:23:01.712680 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:23:01.712692 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:23:01.712705 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:23:01.712717 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:23:01.712728 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:23:01.712740 | orchestrator |
2025-07-12 13:23:01.712752 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2025-07-12 13:23:01.712765 | orchestrator | Saturday 12 July 2025 13:22:41 +0000 (0:00:01.642) 0:00:41.705 *********
2025-07-12 13:23:01.712778 | orchestrator | changed: [testbed-manager]
2025-07-12 13:23:01.712788 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:23:01.712799 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:23:01.712809 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:23:01.712820 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:23:01.712830 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:23:01.712841 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:23:01.712851 | orchestrator |
2025-07-12 13:23:01.712862 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2025-07-12 13:23:01.712873 | orchestrator | Saturday 12 July 2025 13:22:42 +0000 (0:00:01.042) 0:00:42.748 *********
2025-07-12 13:23:01.712883 | orchestrator | ok: [testbed-manager]
2025-07-12 13:23:01.712898 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:23:01.712915 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:23:01.712941 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:23:01.712961 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:23:01.712978 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:23:01.712994 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:23:01.713011 | orchestrator |
2025-07-12 13:23:01.713028 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2025-07-12 13:23:01.713044 | orchestrator | Saturday 12 July 2025 13:22:43 +0000 (0:00:00.834) 0:00:43.582 *********
2025-07-12 13:23:01.713063 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:23:01.713082 | orchestrator |
2025-07-12 13:23:01.713100 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2025-07-12 13:23:01.713119 | orchestrator | Saturday 12 July 2025 13:22:43 +0000 (0:00:00.306) 0:00:43.889 *********
2025-07-12 13:23:01.713165 | orchestrator | changed: [testbed-manager]
2025-07-12 13:23:01.713183 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:23:01.713201 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:23:01.713219 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:23:01.713266 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:23:01.713285 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:23:01.713304 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:23:01.713322 | orchestrator |
2025-07-12 13:23:01.713367 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2025-07-12 13:23:01.713388 | orchestrator | Saturday 12 July 2025 13:22:44 +0000 (0:00:01.031) 0:00:44.921 *********
2025-07-12 13:23:01.713406 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:23:01.713425 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:23:01.713443 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:23:01.713461 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:23:01.713479 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:23:01.713497 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:23:01.713515 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:23:01.713534 | orchestrator |
2025-07-12 13:23:01.713553 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2025-07-12 13:23:01.713589 | orchestrator | Saturday 12 July 2025 13:22:44 +0000 (0:00:00.340) 0:00:45.261 *********
2025-07-12 13:23:01.713609 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:23:01.713627 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:23:01.713645 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:23:01.713664 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:23:01.713682 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:23:01.713700 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:23:01.713717 | orchestrator | changed: [testbed-manager]
2025-07-12 13:23:01.713733 | orchestrator |
2025-07-12 13:23:01.713744 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2025-07-12 13:23:01.713755 | orchestrator | Saturday 12 July 2025 13:22:56 +0000 (0:00:11.657) 0:00:56.919 *********
2025-07-12 13:23:01.713766 | orchestrator | ok: [testbed-manager]
2025-07-12 13:23:01.713777 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:23:01.713787 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:23:01.713797 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:23:01.713808 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:23:01.713818 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:23:01.713828 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:23:01.713839 | orchestrator |
2025-07-12 13:23:01.713850 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2025-07-12 13:23:01.713860 | orchestrator | Saturday 12 July 2025 13:22:57 +0000 (0:00:01.007) 0:00:57.926 *********
2025-07-12 13:23:01.713871 | orchestrator | ok: [testbed-manager]
2025-07-12 13:23:01.713881 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:23:01.713891 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:23:01.713901 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:23:01.713911 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:23:01.713922 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:23:01.713932 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:23:01.713942 | orchestrator |
2025-07-12 13:23:01.713953 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2025-07-12 13:23:01.713963 | orchestrator | Saturday 12 July 2025 13:22:58 +0000 (0:00:00.889) 0:00:58.816 *********
2025-07-12 13:23:01.713974 | orchestrator | ok: [testbed-manager]
2025-07-12 13:23:01.714003 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:23:01.714014 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:23:01.714092 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:23:01.714111 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:23:01.714130 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:23:01.714148 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:23:01.714166 | orchestrator |
2025-07-12 13:23:01.714184 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-07-12 13:23:01.714204 | orchestrator | Saturday 12 July 2025 13:22:58 +0000 (0:00:00.211) 0:00:59.027 *********
2025-07-12 13:23:01.714222 | orchestrator | ok: [testbed-manager]
2025-07-12 13:23:01.714266 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:23:01.714286 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:23:01.714303 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:23:01.714321 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:23:01.714339 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:23:01.714359 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:23:01.714370 | orchestrator |
2025-07-12 13:23:01.714381 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-07-12 13:23:01.714392 | orchestrator | Saturday 12 July 2025 13:22:58 +0000 (0:00:00.281) 0:00:59.246 *********
2025-07-12 13:23:01.714403 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:23:01.714415 | orchestrator |
2025-07-12 13:23:01.714426 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-07-12 13:23:01.714436 | orchestrator | Saturday 12 July 2025 13:22:59 +0000 (0:00:00.281) 0:00:59.528 *********
2025-07-12 13:23:01.714458 | orchestrator | ok: [testbed-manager]
2025-07-12 13:23:01.714468 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:23:01.714479 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:23:01.714489 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:23:01.714500 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:23:01.714510 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:23:01.714521 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:23:01.714531 | orchestrator |
2025-07-12 13:23:01.714542 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-07-12 13:23:01.714553 | orchestrator | Saturday 12 July 2025 13:23:00 +0000 (0:00:01.658) 0:01:01.187 *********
2025-07-12 13:23:01.714563 | orchestrator | changed: [testbed-manager]
2025-07-12 13:23:01.714574 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:23:01.714584 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:23:01.714595 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:23:01.714613 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:23:01.714624 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:23:01.714634 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:23:01.714645 | orchestrator |
2025-07-12 13:23:01.714655 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-07-12 13:23:01.714666 | orchestrator | Saturday 12 July 2025 13:23:01 +0000 (0:00:00.561) 0:01:01.749 *********
2025-07-12 13:23:01.714677 | orchestrator | ok: [testbed-manager]
2025-07-12 13:23:01.714687 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:23:01.714698 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:23:01.714708 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:23:01.714719 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:23:01.714729 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:23:01.714740 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:23:01.714751 | orchestrator |
2025-07-12 13:23:01.714775 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-07-12 13:25:18.942490 | orchestrator | Saturday 12 July 2025 13:23:01 +0000 (0:00:00.218) 0:01:01.967 *********
2025-07-12 13:25:18.942616 | orchestrator | ok: [testbed-manager]
2025-07-12 13:25:18.942634 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:25:18.942646 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:25:18.942657 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:25:18.942668 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:25:18.942678 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:25:18.942689 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:25:18.942700 | orchestrator |
2025-07-12 13:25:18.942712 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-07-12 13:25:18.942723 | orchestrator | Saturday 12 July 2025 13:23:02 +0000 (0:00:01.237) 0:01:03.205 *********
2025-07-12 13:25:18.942734 | orchestrator | changed: [testbed-manager]
2025-07-12 13:25:18.942745 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:25:18.942756 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:25:18.942766 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:25:18.942777 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:25:18.942788 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:25:18.942798 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:25:18.942809 | orchestrator |
2025-07-12 13:25:18.942820 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-07-12 13:25:18.942831 | orchestrator | Saturday 12 July 2025 13:23:04 +0000 (0:00:01.680) 0:01:04.886 *********
2025-07-12 13:25:18.942841 | orchestrator | ok: [testbed-manager]
2025-07-12 13:25:18.942852 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:25:18.942863 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:25:18.942874 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:25:18.942884 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:25:18.942895 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:25:18.942905 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:25:18.942916 | orchestrator |
2025-07-12 13:25:18.942927 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-07-12 13:25:18.942963 | orchestrator | Saturday 12 July 2025 13:23:06 +0000 (0:00:02.325) 0:01:07.212 *********
2025-07-12 13:25:18.942974 | orchestrator | ok: [testbed-manager]
2025-07-12 13:25:18.942985 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:25:18.942995 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:25:18.943006 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:25:18.943017 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:25:18.943029 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:25:18.943040 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:25:18.943052 | orchestrator |
2025-07-12 13:25:18.943064 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-07-12 13:25:18.943076 | orchestrator | Saturday 12 July 2025 13:23:47 +0000 (0:00:40.223) 0:01:47.435 *********
2025-07-12 13:25:18.943088 | orchestrator | changed: [testbed-manager]
2025-07-12 13:25:18.943100 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:25:18.943111 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:25:18.943123 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:25:18.943134 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:25:18.943146 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:25:18.943158 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:25:18.943170 | orchestrator |
2025-07-12 13:25:18.943182 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-07-12 13:25:18.943194 | orchestrator | Saturday 12 July 2025 13:25:03 +0000 (0:01:16.360) 0:03:03.796 *********
2025-07-12 13:25:18.943206 | orchestrator | ok: [testbed-manager]
2025-07-12 13:25:18.943218 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:25:18.943230 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:25:18.943242 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:25:18.943253 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:25:18.943265 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:25:18.943276 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:25:18.943319 | orchestrator |
2025-07-12 13:25:18.943332 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2025-07-12 13:25:18.943345 | orchestrator | Saturday 12 July 2025 13:25:05 +0000 (0:00:01.873) 0:03:05.669 *********
2025-07-12 13:25:18.943357 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:25:18.943369 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:25:18.943380 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:25:18.943391 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:25:18.943401 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:25:18.943412 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:25:18.943423 | orchestrator | changed: [testbed-manager]
2025-07-12 13:25:18.943433 | orchestrator |
2025-07-12 13:25:18.943444 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2025-07-12 13:25:18.943454 | orchestrator | Saturday 12 July 2025 13:25:17 +0000 (0:00:12.169) 0:03:17.839 *********
2025-07-12 13:25:18.943479 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-07-12 13:25:18.943510 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2025-07-12 13:25:18.943547 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-07-12 13:25:18.943572 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-07-12 13:25:18.943584 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2025-07-12 13:25:18.943595 | orchestrator |
2025-07-12 13:25:18.943606 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2025-07-12 13:25:18.943617 | orchestrator | Saturday 12 July 2025 13:25:17 +0000 (0:00:00.433) 0:03:18.273 *********
2025-07-12 13:25:18.943628 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-07-12 13:25:18.943640 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:25:18.943651 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-07-12 13:25:18.943661 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:25:18.943672 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-07-12 13:25:18.943683 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:25:18.943693 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-07-12 13:25:18.943704 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:25:18.943714 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-07-12 13:25:18.943725 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-07-12 13:25:18.943735 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-07-12 13:25:18.943746 | orchestrator |
2025-07-12 13:25:18.943757 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2025-07-12 13:25:18.943767 | orchestrator | Saturday 12 July 2025 13:25:18 +0000 (0:00:00.745) 0:03:19.018 *********
2025-07-12 13:25:18.943778 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-07-12 13:25:18.943790 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-07-12 13:25:18.943800 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-07-12 13:25:18.943811 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-07-12 13:25:18.943822 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-07-12 13:25:18.943833 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-07-12 13:25:18.943843 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-07-12 13:25:18.943854 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-07-12 13:25:18.943864 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-07-12 13:25:18.943875 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-07-12 13:25:18.943886 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:25:18.943896 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-07-12 13:25:18.943914 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-07-12 13:25:18.943925 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-07-12 13:25:18.943935 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-07-12 13:25:18.943946 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-07-12 13:25:18.943957 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-07-12 13:25:18.943967 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-07-12 13:25:18.943978 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-07-12 13:25:18.943989 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-07-12 13:25:18.944000 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-07-12 13:25:18.944017 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-07-12 13:25:26.480170 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-07-12 13:25:26.480328 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-07-12 13:25:26.480352 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-07-12 13:25:26.480369 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-07-12 13:25:26.480384 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:25:26.480399 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-07-12 13:25:26.480412 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-07-12 13:25:26.480427 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-07-12 13:25:26.480441 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-07-12 13:25:26.480456 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-07-12 13:25:26.480470 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-07-12 13:25:26.480485 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:25:26.480500 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-07-12 13:25:26.480514 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-07-12 13:25:26.480527 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-07-12 13:25:26.480539 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-07-12 13:25:26.480551 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-07-12 13:25:26.480564 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-07-12 13:25:26.480576 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-07-12 13:25:26.480589 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-07-12 13:25:26.480603 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-07-12 13:25:26.480617 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:25:26.480631 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-07-12 13:25:26.480645 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-07-12 13:25:26.480680 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-07-12 13:25:26.480693 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-07-12 13:25:26.480709 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-07-12 13:25:26.480724 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-07-12 13:25:26.480740 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-07-12 13:25:26.480775 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-07-12 13:25:26.480791 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-07-12 13:25:26.480808 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-07-12 13:25:26.480824 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-07-12 13:25:26.480839 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-07-12 13:25:26.480856 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-07-12 13:25:26.480872 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-07-12 13:25:26.480887 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-07-12 13:25:26.480908 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-07-12 13:25:26.480924 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-07-12 13:25:26.480940 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-07-12 13:25:26.480955 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-07-12 13:25:26.480970 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-07-12 13:25:26.480987 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-07-12 13:25:26.481025 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-07-12 13:25:26.481041 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-07-12 13:25:26.481056 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-07-12 13:25:26.481069 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-07-12 13:25:26.481082 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-07-12 13:25:26.481096 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-07-12 13:25:26.481109 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-07-12 13:25:26.481123 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-07-12 13:25:26.481136 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-07-12 13:25:26.481149 | orchestrator |
2025-07-12 13:25:26.481163 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2025-07-12 13:25:26.481177 | orchestrator | Saturday 12 July 2025 13:25:24 +0000 (0:00:05.735) 0:03:24.754 *********
2025-07-12 13:25:26.481192 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2025-07-12 13:25:26.481206 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2025-07-12 13:25:26.481231 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2025-07-12 13:25:26.481245 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2025-07-12 13:25:26.481260 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2025-07-12 13:25:26.481273 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2025-07-12 13:25:26.481312 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2025-07-12 13:25:26.481325 | orchestrator |
2025-07-12 13:25:26.481338 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2025-07-12 13:25:26.481351 | orchestrator | Saturday 12 July 2025 13:25:25 +0000 (0:00:00.589) 0:03:25.344 *********
2025-07-12 13:25:26.481364 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-07-12 13:25:26.481378 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-07-12 13:25:26.481392 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:25:26.481411 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:25:26.481426 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-07-12 13:25:26.481441 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-07-12 13:25:26.481455 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:25:26.481468 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:25:26.481482 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-07-12 13:25:26.481495 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-07-12 13:25:26.481509 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-07-12 13:25:26.481523 | orchestrator |
2025-07-12 13:25:26.481537 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-07-12 13:25:26.481551 | orchestrator | Saturday 12 July 2025 13:25:25 +0000 (0:00:00.545) 0:03:25.889 *********
2025-07-12 13:25:26.481565 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-07-12 13:25:26.481578 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-07-12 13:25:26.481592 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:25:26.481605 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-07-12 13:25:26.481619 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-07-12 13:25:26.481633 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:25:26.481646 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:25:26.481660 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:25:26.481679 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-07-12 13:25:26.481693 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-07-12 13:25:26.481707 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-07-12 13:25:26.481720 | orchestrator |
2025-07-12 13:25:26.481733 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-07-12 13:25:26.481746 | orchestrator | Saturday 12 July 2025 13:25:26 +0000 (0:00:00.611) 0:03:26.500 *********
2025-07-12 13:25:26.481760 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:25:26.481774 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:25:26.481787 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:25:26.481800 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:25:26.481814 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:25:26.481846 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:25:38.272372 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:25:38.272490 | orchestrator |
2025-07-12 13:25:38.272508 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-07-12 13:25:38.272522 | orchestrator | Saturday 12 July 2025 13:25:26 +0000 (0:00:00.245) 0:03:26.746 *********
2025-07-12 13:25:38.272533 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:25:38.272545 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:25:38.272556 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:25:38.272567 | orchestrator | ok: [testbed-manager]
2025-07-12 13:25:38.272578 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:25:38.272589 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:25:38.272599 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:25:38.272610 | orchestrator |
2025-07-12 13:25:38.272621 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-07-12 13:25:38.272632 | orchestrator | Saturday 12 July 2025 13:25:32 +0000 (0:00:05.720) 0:03:32.466 *********
2025-07-12 13:25:38.272643 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-07-12 13:25:38.272654 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:25:38.272665 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-07-12 13:25:38.272675 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:25:38.272686 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-07-12 13:25:38.272697 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:25:38.272707 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-07-12 13:25:38.272718 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:25:38.272728 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-07-12 13:25:38.272739 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-07-12 13:25:38.272749 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:25:38.272760 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:25:38.272771 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-07-12 13:25:38.272781 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:25:38.272792 | orchestrator |
2025-07-12 13:25:38.272803 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-07-12 13:25:38.272813 | orchestrator | Saturday 12 July 2025 13:25:32 +0000 (0:00:00.304) 0:03:32.771 *********
2025-07-12 13:25:38.272824 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-07-12 13:25:38.272835 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-07-12 13:25:38.272846 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-07-12 13:25:38.272857 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-07-12 13:25:38.272870 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-07-12 13:25:38.272882 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-07-12 13:25:38.272894 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-07-12 13:25:38.272906 | orchestrator |
2025-07-12 13:25:38.272918 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-07-12 13:25:38.272931 | orchestrator | Saturday 12 July 2025 13:25:33 +0000 (0:00:01.112) 0:03:33.884 *********
2025-07-12 13:25:38.272946 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:25:38.272960 | orchestrator |
2025-07-12 13:25:38.272973 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-07-12 13:25:38.272985 | orchestrator | Saturday 12 July 2025 13:25:34 +0000 (0:00:00.584) 0:03:34.468 *********
2025-07-12 13:25:38.272997 | orchestrator | ok: [testbed-manager]
2025-07-12 13:25:38.273009 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:25:38.273021 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:25:38.273033 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:25:38.273045 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:25:38.273056 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:25:38.273068 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:25:38.273108 | orchestrator |
2025-07-12 13:25:38.273120 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-07-12 13:25:38.273133 | orchestrator | Saturday
12 July 2025 13:25:35 +0000 (0:00:01.363) 0:03:35.832 ********* 2025-07-12 13:25:38.273145 | orchestrator | ok: [testbed-manager] 2025-07-12 13:25:38.273157 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:25:38.273169 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:25:38.273181 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:25:38.273193 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:25:38.273205 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:25:38.273217 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:25:38.273228 | orchestrator | 2025-07-12 13:25:38.273239 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-07-12 13:25:38.273250 | orchestrator | Saturday 12 July 2025 13:25:36 +0000 (0:00:00.619) 0:03:36.451 ********* 2025-07-12 13:25:38.273260 | orchestrator | changed: [testbed-manager] 2025-07-12 13:25:38.273271 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:25:38.273281 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:25:38.273316 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:25:38.273327 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:25:38.273338 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:25:38.273349 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:25:38.273359 | orchestrator | 2025-07-12 13:25:38.273370 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-07-12 13:25:38.273395 | orchestrator | Saturday 12 July 2025 13:25:36 +0000 (0:00:00.579) 0:03:37.030 ********* 2025-07-12 13:25:38.273407 | orchestrator | ok: [testbed-manager] 2025-07-12 13:25:38.273417 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:25:38.273428 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:25:38.273438 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:25:38.273449 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:25:38.273459 | orchestrator | ok: [testbed-node-5] 2025-07-12 
13:25:38.273470 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:25:38.273496 | orchestrator | 2025-07-12 13:25:38.273508 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-07-12 13:25:38.273518 | orchestrator | Saturday 12 July 2025 13:25:37 +0000 (0:00:00.563) 0:03:37.593 ********* 2025-07-12 13:25:38.273552 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1752325290.641, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 13:25:38.273568 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1752325403.0175374, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 13:25:38.273580 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1752325421.0857105, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 13:25:38.273600 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1752325401.3015237, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 13:25:38.273611 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1752325407.8234844, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 13:25:38.273622 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1752325415.1658788, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 13:25:38.273634 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1752325411.6195045, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 13:25:38.273664 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1752325382.6581087, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 13:26:03.294143 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1752325295.0824652, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 13:26:03.294281 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 
1752325312.7033458, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 13:26:03.294349 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1752325307.9991145, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 13:26:03.294364 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1752325297.3093479, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 13:26:03.294376 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1752325308.5979564, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 13:26:03.294392 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1752325305.3218045, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 13:26:03.294405 | orchestrator | 2025-07-12 13:26:03.294418 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-07-12 13:26:03.294432 | orchestrator | Saturday 12 July 2025 13:25:38 +0000 (0:00:00.934) 0:03:38.528 ********* 2025-07-12 13:26:03.294443 | orchestrator | changed: [testbed-manager] 2025-07-12 13:26:03.294454 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:26:03.294465 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:26:03.294476 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:26:03.294487 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:26:03.294498 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:26:03.294508 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:26:03.294519 | orchestrator | 2025-07-12 13:26:03.294529 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-07-12 13:26:03.294540 | orchestrator | Saturday 12 July 2025 13:25:39 +0000 (0:00:01.152) 0:03:39.681 ********* 2025-07-12 13:26:03.294551 | orchestrator | changed: [testbed-manager] 2025-07-12 13:26:03.294562 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:26:03.294572 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:26:03.294583 | orchestrator | changed: [testbed-node-2] 
2025-07-12 13:26:03.294611 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:26:03.294623 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:26:03.294634 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:26:03.294645 | orchestrator | 2025-07-12 13:26:03.294656 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-07-12 13:26:03.294666 | orchestrator | Saturday 12 July 2025 13:25:40 +0000 (0:00:01.122) 0:03:40.803 ********* 2025-07-12 13:26:03.294677 | orchestrator | changed: [testbed-manager] 2025-07-12 13:26:03.294697 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:26:03.294708 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:26:03.294718 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:26:03.294728 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:26:03.294739 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:26:03.294749 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:26:03.294759 | orchestrator | 2025-07-12 13:26:03.294770 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-07-12 13:26:03.294801 | orchestrator | Saturday 12 July 2025 13:25:41 +0000 (0:00:01.136) 0:03:41.939 ********* 2025-07-12 13:26:03.294812 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:26:03.294823 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:26:03.294834 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:26:03.294844 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:26:03.294855 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:26:03.294866 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:26:03.294876 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:26:03.294887 | orchestrator | 2025-07-12 13:26:03.294898 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-07-12 13:26:03.294909 | orchestrator | Saturday 12 
July 2025 13:25:41 +0000 (0:00:00.276) 0:03:42.215 ********* 2025-07-12 13:26:03.294932 | orchestrator | ok: [testbed-manager] 2025-07-12 13:26:03.294944 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:26:03.294954 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:26:03.294965 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:26:03.294976 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:26:03.294986 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:26:03.294997 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:26:03.295007 | orchestrator | 2025-07-12 13:26:03.295018 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-07-12 13:26:03.295029 | orchestrator | Saturday 12 July 2025 13:25:42 +0000 (0:00:00.745) 0:03:42.960 ********* 2025-07-12 13:26:03.295042 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:26:03.295055 | orchestrator | 2025-07-12 13:26:03.295066 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-07-12 13:26:03.295077 | orchestrator | Saturday 12 July 2025 13:25:43 +0000 (0:00:00.397) 0:03:43.358 ********* 2025-07-12 13:26:03.295087 | orchestrator | ok: [testbed-manager] 2025-07-12 13:26:03.295098 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:26:03.295109 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:26:03.295119 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:26:03.295130 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:26:03.295141 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:26:03.295152 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:26:03.295162 | orchestrator | 2025-07-12 13:26:03.295173 | orchestrator | TASK [osism.services.rng : Remove haveged package] 
***************************** 2025-07-12 13:26:03.295184 | orchestrator | Saturday 12 July 2025 13:25:51 +0000 (0:00:07.934) 0:03:51.293 ********* 2025-07-12 13:26:03.295207 | orchestrator | ok: [testbed-manager] 2025-07-12 13:26:03.295218 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:26:03.295229 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:26:03.295240 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:26:03.295250 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:26:03.295261 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:26:03.295271 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:26:03.295282 | orchestrator | 2025-07-12 13:26:03.295293 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-07-12 13:26:03.295331 | orchestrator | Saturday 12 July 2025 13:25:52 +0000 (0:00:01.243) 0:03:52.537 ********* 2025-07-12 13:26:03.295343 | orchestrator | ok: [testbed-manager] 2025-07-12 13:26:03.295353 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:26:03.295371 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:26:03.295382 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:26:03.295392 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:26:03.295403 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:26:03.295413 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:26:03.295424 | orchestrator | 2025-07-12 13:26:03.295435 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-07-12 13:26:03.295446 | orchestrator | Saturday 12 July 2025 13:25:53 +0000 (0:00:01.054) 0:03:53.591 ********* 2025-07-12 13:26:03.295463 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:26:03.295474 | orchestrator | 2025-07-12 13:26:03.295485 | orchestrator | 
TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-07-12 13:26:03.295496 | orchestrator | Saturday 12 July 2025 13:25:53 +0000 (0:00:00.536) 0:03:54.127 ********* 2025-07-12 13:26:03.295506 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:26:03.295517 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:26:03.295527 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:26:03.295538 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:26:03.295549 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:26:03.295559 | orchestrator | changed: [testbed-manager] 2025-07-12 13:26:03.295570 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:26:03.295580 | orchestrator | 2025-07-12 13:26:03.295591 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-07-12 13:26:03.295602 | orchestrator | Saturday 12 July 2025 13:26:02 +0000 (0:00:08.778) 0:04:02.906 ********* 2025-07-12 13:26:03.295613 | orchestrator | changed: [testbed-manager] 2025-07-12 13:26:03.295623 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:26:03.295634 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:26:03.295652 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:27:14.217578 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:27:14.217701 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:27:14.217716 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:27:14.217729 | orchestrator | 2025-07-12 13:27:14.217742 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-07-12 13:27:14.217755 | orchestrator | Saturday 12 July 2025 13:26:03 +0000 (0:00:00.648) 0:04:03.555 ********* 2025-07-12 13:27:14.217766 | orchestrator | changed: [testbed-manager] 2025-07-12 13:27:14.217777 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:27:14.217787 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:27:14.217798 | 
orchestrator | changed: [testbed-node-3] 2025-07-12 13:27:14.217808 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:27:14.217819 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:27:14.217829 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:27:14.217840 | orchestrator | 2025-07-12 13:27:14.217851 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-07-12 13:27:14.217862 | orchestrator | Saturday 12 July 2025 13:26:04 +0000 (0:00:01.091) 0:04:04.647 ********* 2025-07-12 13:27:14.217873 | orchestrator | changed: [testbed-manager] 2025-07-12 13:27:14.217883 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:27:14.217894 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:27:14.217904 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:27:14.217915 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:27:14.217925 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:27:14.217936 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:27:14.217946 | orchestrator | 2025-07-12 13:27:14.217957 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-07-12 13:27:14.217968 | orchestrator | Saturday 12 July 2025 13:26:05 +0000 (0:00:01.034) 0:04:05.681 ********* 2025-07-12 13:27:14.217978 | orchestrator | ok: [testbed-manager] 2025-07-12 13:27:14.217990 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:27:14.218083 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:27:14.218099 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:27:14.218111 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:27:14.218122 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:27:14.218134 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:27:14.218147 | orchestrator | 2025-07-12 13:27:14.218159 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-07-12 13:27:14.218172 | orchestrator | 
Saturday 12 July 2025 13:26:05 +0000 (0:00:00.304) 0:04:05.986 ********* 2025-07-12 13:27:14.218184 | orchestrator | ok: [testbed-manager] 2025-07-12 13:27:14.218196 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:27:14.218208 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:27:14.218220 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:27:14.218231 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:27:14.218243 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:27:14.218255 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:27:14.218267 | orchestrator | 2025-07-12 13:27:14.218280 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-07-12 13:27:14.218292 | orchestrator | Saturday 12 July 2025 13:26:06 +0000 (0:00:00.319) 0:04:06.305 ********* 2025-07-12 13:27:14.218303 | orchestrator | ok: [testbed-manager] 2025-07-12 13:27:14.218313 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:27:14.218324 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:27:14.218335 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:27:14.218345 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:27:14.218356 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:27:14.218366 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:27:14.218377 | orchestrator | 2025-07-12 13:27:14.218388 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-07-12 13:27:14.218399 | orchestrator | Saturday 12 July 2025 13:26:06 +0000 (0:00:00.317) 0:04:06.623 ********* 2025-07-12 13:27:14.218409 | orchestrator | ok: [testbed-manager] 2025-07-12 13:27:14.218420 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:27:14.218430 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:27:14.218464 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:27:14.218475 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:27:14.218486 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:27:14.218496 | 
orchestrator | ok: [testbed-node-4] 2025-07-12 13:27:14.218507 | orchestrator | 2025-07-12 13:27:14.218518 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-07-12 13:27:14.218528 | orchestrator | Saturday 12 July 2025 13:26:12 +0000 (0:00:05.829) 0:04:12.452 ********* 2025-07-12 13:27:14.218542 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:27:14.218555 | orchestrator | 2025-07-12 13:27:14.218566 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-07-12 13:27:14.218577 | orchestrator | Saturday 12 July 2025 13:26:12 +0000 (0:00:00.405) 0:04:12.857 ********* 2025-07-12 13:27:14.218587 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-07-12 13:27:14.218598 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-07-12 13:27:14.218624 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-07-12 13:27:14.218635 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:27:14.218645 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-07-12 13:27:14.218656 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-07-12 13:27:14.218667 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-07-12 13:27:14.218678 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:27:14.218688 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-07-12 13:27:14.218699 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:27:14.218709 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-07-12 13:27:14.218721 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-07-12 13:27:14.218740 
| orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-07-12 13:27:14.218751 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:27:14.218762 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-07-12 13:27:14.218773 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-07-12 13:27:14.218784 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:27:14.218811 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:27:14.218823 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-07-12 13:27:14.218834 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-07-12 13:27:14.218844 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:27:14.218855 | orchestrator |
2025-07-12 13:27:14.218866 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-07-12 13:27:14.218877 | orchestrator | Saturday 12 July 2025 13:26:12 +0000 (0:00:00.361) 0:04:13.219 *********
2025-07-12 13:27:14.218888 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:27:14.218899 | orchestrator |
2025-07-12 13:27:14.218910 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-07-12 13:27:14.218920 | orchestrator | Saturday 12 July 2025 13:26:13 +0000 (0:00:00.376) 0:04:13.596 *********
2025-07-12 13:27:14.218931 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-07-12 13:27:14.218942 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-07-12 13:27:14.218952 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:27:14.218963 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:27:14.218974 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-07-12 13:27:14.218985 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-07-12 13:27:14.218996 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:27:14.219006 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-07-12 13:27:14.219017 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:27:14.219027 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:27:14.219038 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-07-12 13:27:14.219049 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:27:14.219059 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-07-12 13:27:14.219070 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:27:14.219080 | orchestrator |
2025-07-12 13:27:14.219091 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-07-12 13:27:14.219102 | orchestrator | Saturday 12 July 2025 13:26:13 +0000 (0:00:00.314) 0:04:13.911 *********
2025-07-12 13:27:14.219113 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:27:14.219124 | orchestrator |
2025-07-12 13:27:14.219135 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-07-12 13:27:14.219146 | orchestrator | Saturday 12 July 2025 13:26:14 +0000 (0:00:00.559) 0:04:14.470 *********
2025-07-12 13:27:14.219157 | orchestrator | changed: [testbed-manager]
2025-07-12 13:27:14.219168 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:27:14.219178 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:27:14.219189 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:27:14.219199 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:27:14.219210 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:27:14.219221 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:27:14.219231 | orchestrator |
2025-07-12 13:27:14.219242 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-07-12 13:27:14.219259 | orchestrator | Saturday 12 July 2025 13:26:49 +0000 (0:00:34.879) 0:04:49.349 *********
2025-07-12 13:27:14.219270 | orchestrator | changed: [testbed-manager]
2025-07-12 13:27:14.219281 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:27:14.219291 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:27:14.219302 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:27:14.219312 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:27:14.219323 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:27:14.219334 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:27:14.219344 | orchestrator |
2025-07-12 13:27:14.219355 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-07-12 13:27:14.219366 | orchestrator | Saturday 12 July 2025 13:26:57 +0000 (0:00:08.506) 0:04:57.856 *********
2025-07-12 13:27:14.219377 | orchestrator | changed: [testbed-manager]
2025-07-12 13:27:14.219387 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:27:14.219397 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:27:14.219408 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:27:14.219419 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:27:14.219429 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:27:14.219477 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:27:14.219488 | orchestrator |
2025-07-12 13:27:14.219499 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-07-12 13:27:14.219510 | orchestrator | Saturday 12 July 2025 13:27:06 +0000 (0:00:08.567) 0:05:06.423 *********
2025-07-12 13:27:14.219520 | orchestrator | ok: [testbed-manager]
2025-07-12 13:27:14.219531 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:27:14.219542 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:27:14.219552 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:27:14.219563 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:27:14.219573 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:27:14.219584 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:27:14.219594 | orchestrator |
2025-07-12 13:27:14.219605 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-07-12 13:27:14.219616 | orchestrator | Saturday 12 July 2025 13:27:08 +0000 (0:00:01.897) 0:05:08.321 *********
2025-07-12 13:27:14.219627 | orchestrator | changed: [testbed-manager]
2025-07-12 13:27:14.219637 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:27:14.219648 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:27:14.219659 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:27:14.219669 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:27:14.219680 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:27:14.219690 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:27:14.219701 | orchestrator |
2025-07-12 13:27:14.219712 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-07-12 13:27:14.219730 | orchestrator | Saturday 12 July 2025 13:27:14 +0000 (0:00:00.449) 0:05:14.464 *********
2025-07-12 13:27:25.754334 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:27:25.754473 | orchestrator |
2025-07-12 13:27:25.754491 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-07-12 13:27:25.754504 | orchestrator | Saturday 12 July 2025 13:27:14 +0000 (0:00:00.449) 0:05:14.913 *********
2025-07-12 13:27:25.754515 | orchestrator | changed: [testbed-manager]
2025-07-12 13:27:25.754527 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:27:25.754538 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:27:25.754549 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:27:25.754559 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:27:25.754570 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:27:25.754580 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:27:25.754591 | orchestrator |
2025-07-12 13:27:25.754602 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-07-12 13:27:25.754613 | orchestrator | Saturday 12 July 2025 13:27:15 +0000 (0:00:00.752) 0:05:15.666 *********
2025-07-12 13:27:25.754647 | orchestrator | ok: [testbed-manager]
2025-07-12 13:27:25.754660 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:27:25.754670 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:27:25.754680 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:27:25.754691 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:27:25.754701 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:27:25.754712 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:27:25.754722 | orchestrator |
2025-07-12 13:27:25.754733 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-07-12 13:27:25.754743 | orchestrator | Saturday 12 July 2025 13:27:17 +0000 (0:00:01.799) 0:05:17.465 *********
2025-07-12 13:27:25.754754 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:27:25.754765 | orchestrator | changed: [testbed-manager]
2025-07-12 13:27:25.754775 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:27:25.754786 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:27:25.754796 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:27:25.754823 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:27:25.754835 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:27:25.754845 | orchestrator |
2025-07-12 13:27:25.754856 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-07-12 13:27:25.754867 | orchestrator | Saturday 12 July 2025 13:27:18 +0000 (0:00:00.828) 0:05:18.293 *********
2025-07-12 13:27:25.754877 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:27:25.754888 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:27:25.754898 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:27:25.754909 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:27:25.754919 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:27:25.754930 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:27:25.754940 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:27:25.754951 | orchestrator |
2025-07-12 13:27:25.754962 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-07-12 13:27:25.754972 | orchestrator | Saturday 12 July 2025 13:27:18 +0000 (0:00:00.294) 0:05:18.588 *********
2025-07-12 13:27:25.754983 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:27:25.754994 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:27:25.755004 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:27:25.755014 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:27:25.755025 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:27:25.755036 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:27:25.755046 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:27:25.755057 | orchestrator |
2025-07-12 13:27:25.755068 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-07-12 13:27:25.755078 | orchestrator | Saturday 12 July 2025 13:27:18 +0000 (0:00:00.413) 0:05:19.002 *********
2025-07-12 13:27:25.755089 | orchestrator | ok: [testbed-manager]
2025-07-12 13:27:25.755099 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:27:25.755110 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:27:25.755120 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:27:25.755130 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:27:25.755141 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:27:25.755151 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:27:25.755162 | orchestrator |
2025-07-12 13:27:25.755172 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-07-12 13:27:25.755183 | orchestrator | Saturday 12 July 2025 13:27:19 +0000 (0:00:00.290) 0:05:19.293 *********
2025-07-12 13:27:25.755194 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:27:25.755204 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:27:25.755215 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:27:25.755225 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:27:25.755236 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:27:25.755246 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:27:25.755256 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:27:25.755267 | orchestrator |
2025-07-12 13:27:25.755285 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-07-12 13:27:25.755302 | orchestrator | Saturday 12 July 2025 13:27:19 +0000 (0:00:00.263) 0:05:19.556 *********
2025-07-12 13:27:25.755313 | orchestrator | ok: [testbed-manager]
2025-07-12 13:27:25.755324 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:27:25.755334 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:27:25.755345 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:27:25.755355 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:27:25.755366 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:27:25.755377 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:27:25.755388 | orchestrator |
2025-07-12 13:27:25.755399 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-07-12 13:27:25.755409 | orchestrator | Saturday 12 July 2025 13:27:19 +0000 (0:00:00.298) 0:05:19.855 *********
2025-07-12 13:27:25.755420 | orchestrator | ok: [testbed-manager] =>
2025-07-12 13:27:25.755431 | orchestrator |  docker_version: 5:27.5.1
2025-07-12 13:27:25.755469 | orchestrator | ok: [testbed-node-0] =>
2025-07-12 13:27:25.755480 | orchestrator |  docker_version: 5:27.5.1
2025-07-12 13:27:25.755491 | orchestrator | ok: [testbed-node-1] =>
2025-07-12 13:27:25.755501 | orchestrator |  docker_version: 5:27.5.1
2025-07-12 13:27:25.755512 | orchestrator | ok: [testbed-node-2] =>
2025-07-12 13:27:25.755523 | orchestrator |  docker_version: 5:27.5.1
2025-07-12 13:27:25.755533 | orchestrator | ok: [testbed-node-3] =>
2025-07-12 13:27:25.755544 | orchestrator |  docker_version: 5:27.5.1
2025-07-12 13:27:25.755572 | orchestrator | ok: [testbed-node-4] =>
2025-07-12 13:27:25.755583 | orchestrator |  docker_version: 5:27.5.1
2025-07-12 13:27:25.755594 | orchestrator | ok: [testbed-node-5] =>
2025-07-12 13:27:25.755604 | orchestrator |  docker_version: 5:27.5.1
2025-07-12 13:27:25.755615 | orchestrator |
2025-07-12 13:27:25.755626 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2025-07-12 13:27:25.755637 | orchestrator | Saturday 12 July 2025 13:27:19 +0000 (0:00:00.326) 0:05:20.181 *********
2025-07-12 13:27:25.755648 | orchestrator | ok: [testbed-manager] =>
2025-07-12 13:27:25.755658 | orchestrator |  docker_cli_version: 5:27.5.1
2025-07-12 13:27:25.755669 | orchestrator | ok: [testbed-node-0] =>
2025-07-12 13:27:25.755679 | orchestrator |  docker_cli_version: 5:27.5.1
2025-07-12 13:27:25.755690 | orchestrator | ok: [testbed-node-1] =>
2025-07-12 13:27:25.755700 | orchestrator |  docker_cli_version: 5:27.5.1
2025-07-12 13:27:25.755711 | orchestrator | ok: [testbed-node-2] =>
2025-07-12 13:27:25.755721 | orchestrator |  docker_cli_version: 5:27.5.1
2025-07-12 13:27:25.755731 | orchestrator | ok: [testbed-node-3] =>
2025-07-12 13:27:25.755742 | orchestrator |  docker_cli_version: 5:27.5.1
2025-07-12 13:27:25.755753 | orchestrator | ok: [testbed-node-4] =>
2025-07-12 13:27:25.755763 | orchestrator |  docker_cli_version: 5:27.5.1
2025-07-12 13:27:25.755774 | orchestrator | ok: [testbed-node-5] =>
2025-07-12 13:27:25.755784 | orchestrator |  docker_cli_version: 5:27.5.1
2025-07-12 13:27:25.755794 | orchestrator |
2025-07-12 13:27:25.755805 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-07-12 13:27:25.755816 | orchestrator | Saturday 12 July 2025 13:27:20 +0000 (0:00:00.408) 0:05:20.590 *********
2025-07-12 13:27:25.755827 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:27:25.755837 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:27:25.755848 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:27:25.755858 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:27:25.755869 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:27:25.755879 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:27:25.755890 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:27:25.755900 | orchestrator |
2025-07-12 13:27:25.755911 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-07-12 13:27:25.755922 | orchestrator | Saturday 12 July 2025 13:27:20 +0000 (0:00:00.293) 0:05:20.883 *********
2025-07-12 13:27:25.755932 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:27:25.755943 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:27:25.756040 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:27:25.756054 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:27:25.756065 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:27:25.756076 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:27:25.756086 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:27:25.756097 | orchestrator |
2025-07-12 13:27:25.756108 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2025-07-12 13:27:25.756118 | orchestrator | Saturday 12 July 2025 13:27:20 +0000 (0:00:00.271) 0:05:21.155 *********
2025-07-12 13:27:25.756132 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:27:25.756144 | orchestrator |
2025-07-12 13:27:25.756155 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2025-07-12 13:27:25.756166 | orchestrator | Saturday 12 July 2025 13:27:21 +0000 (0:00:00.418) 0:05:21.573 *********
2025-07-12 13:27:25.756177 | orchestrator | ok: [testbed-manager]
2025-07-12 13:27:25.756187 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:27:25.756198 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:27:25.756209 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:27:25.756219 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:27:25.756230 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:27:25.756240 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:27:25.756251 | orchestrator |
2025-07-12 13:27:25.756261 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2025-07-12 13:27:25.756272 | orchestrator | Saturday 12 July 2025 13:27:22 +0000 (0:00:01.089) 0:05:22.663 *********
2025-07-12 13:27:25.756283 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:27:25.756293 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:27:25.756304 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:27:25.756314 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:27:25.756325 | orchestrator | ok: [testbed-manager]
2025-07-12 13:27:25.756335 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:27:25.756346 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:27:25.756356 | orchestrator |
2025-07-12 13:27:25.756367 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2025-07-12 13:27:25.756379 | orchestrator | Saturday 12 July 2025 13:27:25 +0000 (0:00:02.787) 0:05:25.450 *********
2025-07-12 13:27:25.756390 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2025-07-12 13:27:25.756401 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2025-07-12 13:27:25.756417 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2025-07-12 13:27:25.756428 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2025-07-12 13:27:25.756472 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2025-07-12 13:27:25.756483 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2025-07-12 13:27:25.756494 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:27:25.756504 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2025-07-12 13:27:25.756515 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2025-07-12 13:27:25.756526 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2025-07-12 13:27:25.756536 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:27:25.756547 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2025-07-12 13:27:25.756557 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2025-07-12 13:27:25.756568 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2025-07-12 13:27:25.756578 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:27:25.756589 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2025-07-12 13:27:25.756600 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2025-07-12 13:27:25.756619 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2025-07-12 13:28:24.646364 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:28:24.646528 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2025-07-12 13:28:24.646548 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2025-07-12 13:28:24.646560 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2025-07-12 13:28:24.646570 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:28:24.646581 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:28:24.646592 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2025-07-12 13:28:24.646603 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2025-07-12 13:28:24.646613 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2025-07-12 13:28:24.646624 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:28:24.646635 | orchestrator |
2025-07-12 13:28:24.646647 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2025-07-12 13:28:24.646659 | orchestrator | Saturday 12 July 2025 13:27:25 +0000 (0:00:00.781) 0:05:26.231 *********
2025-07-12 13:28:24.646670 | orchestrator | ok: [testbed-manager]
2025-07-12 13:28:24.646681 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:28:24.646692 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:28:24.646702 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:28:24.646713 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:28:24.646723 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:28:24.646733 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:28:24.646744 | orchestrator |
2025-07-12 13:28:24.646754 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-07-12 13:28:24.646765 | orchestrator | Saturday 12 July 2025 13:27:32 +0000 (0:00:06.137) 0:05:32.369 *********
2025-07-12 13:28:24.646776 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:28:24.646786 | orchestrator | ok: [testbed-manager]
2025-07-12 13:28:24.646797 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:28:24.646807 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:28:24.646817 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:28:24.646828 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:28:24.646838 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:28:24.646848 | orchestrator |
2025-07-12 13:28:24.646859 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-07-12 13:28:24.646870 | orchestrator | Saturday 12 July 2025 13:27:33 +0000 (0:00:01.069) 0:05:33.439 *********
2025-07-12 13:28:24.646880 | orchestrator | ok: [testbed-manager]
2025-07-12 13:28:24.646891 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:28:24.646903 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:28:24.646916 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:28:24.646928 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:28:24.646940 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:28:24.646952 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:28:24.646964 | orchestrator |
2025-07-12 13:28:24.646979 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-07-12 13:28:24.646998 | orchestrator | Saturday 12 July 2025 13:27:40 +0000 (0:00:07.518) 0:05:40.957 *********
2025-07-12 13:28:24.647019 | orchestrator | changed: [testbed-manager]
2025-07-12 13:28:24.647038 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:28:24.647056 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:28:24.647070 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:28:24.647089 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:28:24.647108 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:28:24.647120 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:28:24.647132 | orchestrator |
2025-07-12 13:28:24.647145 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-07-12 13:28:24.647157 | orchestrator | Saturday 12 July 2025 13:27:44 +0000 (0:00:03.534) 0:05:44.491 *********
2025-07-12 13:28:24.647176 | orchestrator | ok: [testbed-manager]
2025-07-12 13:28:24.647196 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:28:24.647208 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:28:24.647221 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:28:24.647259 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:28:24.647270 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:28:24.647281 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:28:24.647291 | orchestrator |
2025-07-12 13:28:24.647302 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-07-12 13:28:24.647312 | orchestrator | Saturday 12 July 2025 13:27:45 +0000 (0:00:01.536) 0:05:46.028 *********
2025-07-12 13:28:24.647323 | orchestrator | ok: [testbed-manager]
2025-07-12 13:28:24.647333 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:28:24.647343 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:28:24.647354 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:28:24.647364 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:28:24.647375 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:28:24.647385 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:28:24.647395 | orchestrator |
2025-07-12 13:28:24.647406 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-07-12 13:28:24.647416 | orchestrator | Saturday 12 July 2025 13:27:47 +0000 (0:00:01.303) 0:05:47.332 *********
2025-07-12 13:28:24.647427 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:28:24.647451 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:28:24.647462 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:28:24.647500 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:28:24.647511 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:28:24.647522 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:28:24.647532 | orchestrator | changed: [testbed-manager]
2025-07-12 13:28:24.647542 | orchestrator |
2025-07-12 13:28:24.647553 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-07-12 13:28:24.647564 | orchestrator | Saturday 12 July 2025 13:27:47 +0000 (0:00:00.630) 0:05:47.962 *********
2025-07-12 13:28:24.647574 | orchestrator | ok: [testbed-manager]
2025-07-12 13:28:24.647585 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:28:24.647595 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:28:24.647605 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:28:24.647616 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:28:24.647626 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:28:24.647636 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:28:24.647647 | orchestrator |
2025-07-12 13:28:24.647657 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-07-12 13:28:24.647668 | orchestrator | Saturday 12 July 2025 13:27:57 +0000 (0:00:09.691) 0:05:57.653 *********
2025-07-12 13:28:24.647678 | orchestrator | changed: [testbed-manager]
2025-07-12 13:28:24.647706 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:28:24.647717 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:28:24.647727 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:28:24.647738 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:28:24.647748 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:28:24.647758 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:28:24.647769 | orchestrator |
2025-07-12 13:28:24.647779 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-07-12 13:28:24.647790 | orchestrator | Saturday 12 July 2025 13:27:58 +0000 (0:00:00.908) 0:05:58.561 *********
2025-07-12 13:28:24.647800 | orchestrator | ok: [testbed-manager]
2025-07-12 13:28:24.647811 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:28:24.647821 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:28:24.647831 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:28:24.647842 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:28:24.647852 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:28:24.647862 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:28:24.647873 | orchestrator |
2025-07-12 13:28:24.647883 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-07-12 13:28:24.647894 | orchestrator | Saturday 12 July 2025 13:28:07 +0000 (0:00:08.950) 0:06:07.511 *********
2025-07-12 13:28:24.647904 | orchestrator | ok: [testbed-manager]
2025-07-12 13:28:24.647924 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:28:24.647935 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:28:24.647945 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:28:24.647956 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:28:24.647966 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:28:24.647976 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:28:24.647987 | orchestrator |
2025-07-12 13:28:24.647997 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-07-12 13:28:24.648008 | orchestrator | Saturday 12 July 2025 13:28:18 +0000 (0:00:10.909) 0:06:18.421 *********
2025-07-12 13:28:24.648019 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-07-12 13:28:24.648029 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-07-12 13:28:24.648040 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-07-12 13:28:24.648050 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-07-12 13:28:24.648060 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-07-12 13:28:24.648071 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-07-12 13:28:24.648081 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-07-12 13:28:24.648092 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-07-12 13:28:24.648102 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-07-12 13:28:24.648112 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-07-12 13:28:24.648123 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-07-12 13:28:24.648133 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-07-12 13:28:24.648144 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-07-12 13:28:24.648154 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-07-12 13:28:24.648164 | orchestrator |
2025-07-12 13:28:24.648175 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-07-12 13:28:24.648185 | orchestrator | Saturday 12 July 2025 13:28:19 +0000 (0:00:01.250) 0:06:19.672 *********
2025-07-12 13:28:24.648196 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:28:24.648206 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:28:24.648217 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:28:24.648227 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:28:24.648237 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:28:24.648248 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:28:24.648258 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:28:24.648269 | orchestrator |
2025-07-12 13:28:24.648279 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-07-12 13:28:24.648290 | orchestrator | Saturday 12 July 2025 13:28:19 +0000 (0:00:00.523) 0:06:20.196 *********
2025-07-12 13:28:24.648300 | orchestrator | ok: [testbed-manager]
2025-07-12 13:28:24.648311 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:28:24.648321 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:28:24.648331 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:28:24.648342 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:28:24.648352 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:28:24.648363 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:28:24.648373 | orchestrator |
2025-07-12 13:28:24.648384 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-07-12 13:28:24.648396 | orchestrator | Saturday 12 July 2025 13:28:23 +0000 (0:00:03.886) 0:06:24.082 *********
2025-07-12 13:28:24.648406 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:28:24.648417 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:28:24.648427 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:28:24.648437 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:28:24.648448 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:28:24.648486 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:28:24.648500 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:28:24.648510 | orchestrator |
2025-07-12 13:28:24.648522 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-07-12 13:28:24.648540 | orchestrator | Saturday 12 July 2025 13:28:24 +0000 (0:00:00.506) 0:06:24.588 *********
2025-07-12 13:28:24.648550 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-07-12 13:28:24.648561 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-07-12 13:28:24.648572 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:28:24.648582 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-07-12 13:28:24.648593 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-07-12 13:28:24.648603 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:28:24.648614 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-07-12 13:28:24.648624 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-07-12 13:28:24.648635 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:28:24.648646 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-07-12 13:28:24.648663 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-07-12 13:28:44.185838 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:28:44.185994 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-07-12 13:28:44.186104 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-07-12 13:28:44.186131 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:28:44.186151 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-07-12 13:28:44.186171 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-07-12 13:28:44.186189 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:28:44.186208 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-07-12 13:28:44.186226 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-07-12 13:28:44.186246 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:28:44.186265 | orchestrator |
2025-07-12 13:28:44.186288 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-07-12 13:28:44.186309 | orchestrator | Saturday 12 July 2025 13:28:24 +0000 (0:00:00.590) 0:06:25.179 *********
2025-07-12 13:28:44.186329 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:28:44.186344 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:28:44.186357 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:28:44.186369 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:28:44.186380 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:28:44.186393 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:28:44.186405 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:28:44.186417 | orchestrator |
2025-07-12 13:28:44.186429 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-07-12 13:28:44.186442 | orchestrator | Saturday 12 July 2025 13:28:25 +0000 (0:00:00.520) 0:06:25.699 *********
2025-07-12 13:28:44.186455 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:28:44.186467 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:28:44.186508 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:28:44.186521 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:28:44.186534 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:28:44.186546 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:28:44.186558 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:28:44.186570 | orchestrator |
2025-07-12 13:28:44.186582 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-07-12 13:28:44.186595 | orchestrator | Saturday 12 July 2025 13:28:25 +0000 (0:00:00.514) 0:06:26.214 *********
2025-07-12 13:28:44.186606 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:28:44.186618 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:28:44.186630 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:28:44.186641 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:28:44.186654 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:28:44.186666 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:28:44.186702 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:28:44.186714 | orchestrator |
2025-07-12 13:28:44.186726 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-07-12 13:28:44.186737 | orchestrator | Saturday 12 July 2025 13:28:26 +0000 (0:00:00.760) 0:06:26.974 *********
2025-07-12 13:28:44.186748 | orchestrator | ok: [testbed-manager]
2025-07-12 13:28:44.186758 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:28:44.186769 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:28:44.186779 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:28:44.186789 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:28:44.186799 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:28:44.186810 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:28:44.186820 | orchestrator |
2025-07-12 13:28:44.186830 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-07-12 13:28:44.186841 | orchestrator | Saturday 12 July 2025 13:28:28 +0000 (0:00:01.662) 0:06:28.636 *********
2025-07-12 13:28:44.186852 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:28:44.186866 | orchestrator |
2025-07-12 13:28:44.186877 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-07-12 13:28:44.186888 | orchestrator | Saturday 12 July 2025 13:28:29 +0000 (0:00:00.879) 0:06:29.516 *********
2025-07-12 13:28:44.186899 | orchestrator | ok: [testbed-manager]
2025-07-12 13:28:44.186909 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:28:44.186919 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:28:44.186930 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:28:44.186940 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:28:44.186950 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:28:44.186961 | orchestrator |
changed: [testbed-node-5] 2025-07-12 13:28:44.186971 | orchestrator | 2025-07-12 13:28:44.186981 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-07-12 13:28:44.186992 | orchestrator | Saturday 12 July 2025 13:28:30 +0000 (0:00:00.911) 0:06:30.428 ********* 2025-07-12 13:28:44.187003 | orchestrator | ok: [testbed-manager] 2025-07-12 13:28:44.187013 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:28:44.187024 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:28:44.187034 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:28:44.187044 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:28:44.187055 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:28:44.187065 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:28:44.187075 | orchestrator | 2025-07-12 13:28:44.187086 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-07-12 13:28:44.187096 | orchestrator | Saturday 12 July 2025 13:28:31 +0000 (0:00:01.104) 0:06:31.532 ********* 2025-07-12 13:28:44.187107 | orchestrator | ok: [testbed-manager] 2025-07-12 13:28:44.187117 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:28:44.187127 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:28:44.187137 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:28:44.187148 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:28:44.187176 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:28:44.187187 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:28:44.187198 | orchestrator | 2025-07-12 13:28:44.187208 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-07-12 13:28:44.187219 | orchestrator | Saturday 12 July 2025 13:28:32 +0000 (0:00:01.348) 0:06:32.881 ********* 2025-07-12 13:28:44.187252 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:28:44.187264 | orchestrator | ok: 
[testbed-node-0] 2025-07-12 13:28:44.187274 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:28:44.187285 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:28:44.187295 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:28:44.187305 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:28:44.187331 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:28:44.187351 | orchestrator | 2025-07-12 13:28:44.187362 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-07-12 13:28:44.187372 | orchestrator | Saturday 12 July 2025 13:28:34 +0000 (0:00:01.415) 0:06:34.296 ********* 2025-07-12 13:28:44.187383 | orchestrator | ok: [testbed-manager] 2025-07-12 13:28:44.187394 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:28:44.187404 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:28:44.187415 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:28:44.187425 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:28:44.187436 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:28:44.187446 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:28:44.187456 | orchestrator | 2025-07-12 13:28:44.187467 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-07-12 13:28:44.187501 | orchestrator | Saturday 12 July 2025 13:28:35 +0000 (0:00:01.347) 0:06:35.644 ********* 2025-07-12 13:28:44.187522 | orchestrator | changed: [testbed-manager] 2025-07-12 13:28:44.187541 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:28:44.187560 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:28:44.187575 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:28:44.187586 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:28:44.187596 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:28:44.187606 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:28:44.187617 | orchestrator | 2025-07-12 13:28:44.187627 | orchestrator | TASK 
[osism.services.docker : Include service tasks] *************************** 2025-07-12 13:28:44.187638 | orchestrator | Saturday 12 July 2025 13:28:36 +0000 (0:00:01.468) 0:06:37.112 ********* 2025-07-12 13:28:44.187649 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:28:44.187660 | orchestrator | 2025-07-12 13:28:44.187670 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-07-12 13:28:44.187681 | orchestrator | Saturday 12 July 2025 13:28:37 +0000 (0:00:01.119) 0:06:38.231 ********* 2025-07-12 13:28:44.187691 | orchestrator | ok: [testbed-manager] 2025-07-12 13:28:44.187702 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:28:44.187712 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:28:44.187722 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:28:44.187733 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:28:44.187743 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:28:44.187753 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:28:44.187764 | orchestrator | 2025-07-12 13:28:44.187774 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-07-12 13:28:44.187785 | orchestrator | Saturday 12 July 2025 13:28:39 +0000 (0:00:01.420) 0:06:39.652 ********* 2025-07-12 13:28:44.187796 | orchestrator | ok: [testbed-manager] 2025-07-12 13:28:44.187806 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:28:44.187816 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:28:44.187827 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:28:44.187837 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:28:44.187847 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:28:44.187858 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:28:44.187868 | orchestrator | 2025-07-12 
13:28:44.187879 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-07-12 13:28:44.187890 | orchestrator | Saturday 12 July 2025 13:28:40 +0000 (0:00:01.107) 0:06:40.759 ********* 2025-07-12 13:28:44.187900 | orchestrator | ok: [testbed-manager] 2025-07-12 13:28:44.187911 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:28:44.187921 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:28:44.187931 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:28:44.187942 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:28:44.187952 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:28:44.187963 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:28:44.187973 | orchestrator | 2025-07-12 13:28:44.187984 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-07-12 13:28:44.188005 | orchestrator | Saturday 12 July 2025 13:28:41 +0000 (0:00:01.341) 0:06:42.101 ********* 2025-07-12 13:28:44.188016 | orchestrator | ok: [testbed-manager] 2025-07-12 13:28:44.188026 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:28:44.188037 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:28:44.188047 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:28:44.188057 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:28:44.188067 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:28:44.188078 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:28:44.188088 | orchestrator | 2025-07-12 13:28:44.188099 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-07-12 13:28:44.188109 | orchestrator | Saturday 12 July 2025 13:28:42 +0000 (0:00:01.146) 0:06:43.248 ********* 2025-07-12 13:28:44.188127 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 
13:28:44.188138 | orchestrator | 2025-07-12 13:28:44.188148 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-07-12 13:28:44.188159 | orchestrator | Saturday 12 July 2025 13:28:43 +0000 (0:00:00.899) 0:06:44.148 ********* 2025-07-12 13:28:44.188169 | orchestrator | 2025-07-12 13:28:44.188180 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-07-12 13:28:44.188191 | orchestrator | Saturday 12 July 2025 13:28:43 +0000 (0:00:00.039) 0:06:44.187 ********* 2025-07-12 13:28:44.188201 | orchestrator | 2025-07-12 13:28:44.188212 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-07-12 13:28:44.188222 | orchestrator | Saturday 12 July 2025 13:28:43 +0000 (0:00:00.043) 0:06:44.230 ********* 2025-07-12 13:28:44.188233 | orchestrator | 2025-07-12 13:28:44.188243 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-07-12 13:28:44.188254 | orchestrator | Saturday 12 July 2025 13:28:43 +0000 (0:00:00.037) 0:06:44.268 ********* 2025-07-12 13:28:44.188264 | orchestrator | 2025-07-12 13:28:44.188284 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-07-12 13:29:10.530838 | orchestrator | Saturday 12 July 2025 13:28:44 +0000 (0:00:00.038) 0:06:44.306 ********* 2025-07-12 13:29:10.530958 | orchestrator | 2025-07-12 13:29:10.530974 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-07-12 13:29:10.530986 | orchestrator | Saturday 12 July 2025 13:28:44 +0000 (0:00:00.044) 0:06:44.351 ********* 2025-07-12 13:29:10.530997 | orchestrator | 2025-07-12 13:29:10.531008 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-07-12 13:29:10.531019 | orchestrator | Saturday 12 July 2025 13:28:44 +0000 (0:00:00.037) 0:06:44.389 ********* 
2025-07-12 13:29:10.531029 | orchestrator | 2025-07-12 13:29:10.531040 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-07-12 13:29:10.531051 | orchestrator | Saturday 12 July 2025 13:28:44 +0000 (0:00:00.041) 0:06:44.431 ********* 2025-07-12 13:29:10.531061 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:29:10.531073 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:29:10.531084 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:29:10.531094 | orchestrator | 2025-07-12 13:29:10.531105 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-07-12 13:29:10.531116 | orchestrator | Saturday 12 July 2025 13:28:45 +0000 (0:00:01.326) 0:06:45.757 ********* 2025-07-12 13:29:10.531127 | orchestrator | changed: [testbed-manager] 2025-07-12 13:29:10.531138 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:29:10.531149 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:29:10.531159 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:29:10.531170 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:29:10.531180 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:29:10.531191 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:29:10.531201 | orchestrator | 2025-07-12 13:29:10.531212 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-07-12 13:29:10.531252 | orchestrator | Saturday 12 July 2025 13:28:46 +0000 (0:00:01.337) 0:06:47.094 ********* 2025-07-12 13:29:10.531263 | orchestrator | changed: [testbed-manager] 2025-07-12 13:29:10.531274 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:29:10.531284 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:29:10.531295 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:29:10.531305 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:29:10.531315 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:29:10.531326 | 
orchestrator | changed: [testbed-node-5] 2025-07-12 13:29:10.531336 | orchestrator | 2025-07-12 13:29:10.531347 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-07-12 13:29:10.531357 | orchestrator | Saturday 12 July 2025 13:28:47 +0000 (0:00:01.173) 0:06:48.268 ********* 2025-07-12 13:29:10.531368 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:29:10.531381 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:29:10.531392 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:29:10.531404 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:29:10.531415 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:29:10.531426 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:29:10.531438 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:29:10.531451 | orchestrator | 2025-07-12 13:29:10.531463 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-07-12 13:29:10.531475 | orchestrator | Saturday 12 July 2025 13:28:50 +0000 (0:00:02.476) 0:06:50.745 ********* 2025-07-12 13:29:10.531487 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:29:10.531548 | orchestrator | 2025-07-12 13:29:10.531561 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-07-12 13:29:10.531573 | orchestrator | Saturday 12 July 2025 13:28:50 +0000 (0:00:00.101) 0:06:50.847 ********* 2025-07-12 13:29:10.531585 | orchestrator | ok: [testbed-manager] 2025-07-12 13:29:10.531597 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:29:10.531610 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:29:10.531621 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:29:10.531633 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:29:10.531645 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:29:10.531656 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:29:10.531668 | orchestrator | 
2025-07-12 13:29:10.531681 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-07-12 13:29:10.531694 | orchestrator | Saturday 12 July 2025 13:28:51 +0000 (0:00:00.985) 0:06:51.832 ********* 2025-07-12 13:29:10.531707 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:29:10.531719 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:29:10.531730 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:29:10.531741 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:29:10.531751 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:29:10.531761 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:29:10.531772 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:29:10.531782 | orchestrator | 2025-07-12 13:29:10.531793 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-07-12 13:29:10.531804 | orchestrator | Saturday 12 July 2025 13:28:52 +0000 (0:00:00.776) 0:06:52.609 ********* 2025-07-12 13:29:10.531829 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:29:10.531843 | orchestrator | 2025-07-12 13:29:10.531854 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-07-12 13:29:10.531864 | orchestrator | Saturday 12 July 2025 13:28:53 +0000 (0:00:00.924) 0:06:53.534 ********* 2025-07-12 13:29:10.531875 | orchestrator | ok: [testbed-manager] 2025-07-12 13:29:10.531886 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:29:10.531896 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:29:10.531906 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:29:10.531917 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:29:10.531937 | orchestrator | ok: [testbed-node-4] 2025-07-12 
13:29:10.531948 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:29:10.531958 | orchestrator | 2025-07-12 13:29:10.531969 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-07-12 13:29:10.531979 | orchestrator | Saturday 12 July 2025 13:28:54 +0000 (0:00:00.857) 0:06:54.391 ********* 2025-07-12 13:29:10.531990 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-07-12 13:29:10.532001 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-07-12 13:29:10.532029 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-07-12 13:29:10.532041 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-07-12 13:29:10.532051 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-07-12 13:29:10.532062 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-07-12 13:29:10.532073 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-07-12 13:29:10.532083 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-07-12 13:29:10.532094 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-07-12 13:29:10.532105 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-07-12 13:29:10.532115 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-07-12 13:29:10.532126 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-07-12 13:29:10.532136 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-07-12 13:29:10.532146 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-07-12 13:29:10.532157 | orchestrator | 2025-07-12 13:29:10.532168 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-07-12 13:29:10.532178 | orchestrator | Saturday 12 July 2025 13:28:56 +0000 (0:00:02.716) 0:06:57.107 ********* 2025-07-12 13:29:10.532189 | 
orchestrator | skipping: [testbed-manager] 2025-07-12 13:29:10.532200 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:29:10.532210 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:29:10.532221 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:29:10.532231 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:29:10.532242 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:29:10.532252 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:29:10.532263 | orchestrator | 2025-07-12 13:29:10.532273 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-07-12 13:29:10.532284 | orchestrator | Saturday 12 July 2025 13:28:57 +0000 (0:00:00.504) 0:06:57.611 ********* 2025-07-12 13:29:10.532297 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:29:10.532309 | orchestrator | 2025-07-12 13:29:10.532320 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-07-12 13:29:10.532330 | orchestrator | Saturday 12 July 2025 13:28:58 +0000 (0:00:00.842) 0:06:58.454 ********* 2025-07-12 13:29:10.532340 | orchestrator | ok: [testbed-manager] 2025-07-12 13:29:10.532351 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:29:10.532361 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:29:10.532372 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:29:10.532382 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:29:10.532393 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:29:10.532403 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:29:10.532414 | orchestrator | 2025-07-12 13:29:10.532424 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-07-12 13:29:10.532435 | 
orchestrator | Saturday 12 July 2025 13:28:59 +0000 (0:00:01.132) 0:06:59.586 ********* 2025-07-12 13:29:10.532445 | orchestrator | ok: [testbed-manager] 2025-07-12 13:29:10.532456 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:29:10.532466 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:29:10.532476 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:29:10.532518 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:29:10.532530 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:29:10.532541 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:29:10.532551 | orchestrator | 2025-07-12 13:29:10.532562 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-07-12 13:29:10.532573 | orchestrator | Saturday 12 July 2025 13:29:00 +0000 (0:00:00.871) 0:07:00.458 ********* 2025-07-12 13:29:10.532583 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:29:10.532594 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:29:10.532604 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:29:10.532615 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:29:10.532625 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:29:10.532635 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:29:10.532646 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:29:10.532656 | orchestrator | 2025-07-12 13:29:10.532667 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-07-12 13:29:10.532677 | orchestrator | Saturday 12 July 2025 13:29:00 +0000 (0:00:00.521) 0:07:00.979 ********* 2025-07-12 13:29:10.532688 | orchestrator | ok: [testbed-manager] 2025-07-12 13:29:10.532698 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:29:10.532708 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:29:10.532719 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:29:10.532729 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:29:10.532740 | orchestrator | ok: 
[testbed-node-5] 2025-07-12 13:29:10.532750 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:29:10.532760 | orchestrator | 2025-07-12 13:29:10.532776 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-07-12 13:29:10.532787 | orchestrator | Saturday 12 July 2025 13:29:02 +0000 (0:00:01.492) 0:07:02.472 ********* 2025-07-12 13:29:10.532798 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:29:10.532808 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:29:10.532818 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:29:10.532829 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:29:10.532839 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:29:10.532849 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:29:10.532860 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:29:10.532870 | orchestrator | 2025-07-12 13:29:10.532881 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-07-12 13:29:10.532892 | orchestrator | Saturday 12 July 2025 13:29:02 +0000 (0:00:00.500) 0:07:02.973 ********* 2025-07-12 13:29:10.532902 | orchestrator | ok: [testbed-manager] 2025-07-12 13:29:10.532913 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:29:10.532923 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:29:10.532933 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:29:10.532944 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:29:10.532954 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:29:10.532965 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:29:10.532975 | orchestrator | 2025-07-12 13:29:10.532992 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-07-12 13:29:42.459812 | orchestrator | Saturday 12 July 2025 13:29:10 +0000 (0:00:07.806) 0:07:10.779 ********* 2025-07-12 13:29:42.459955 | orchestrator | ok: [testbed-manager] 
2025-07-12 13:29:42.459973 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:29:42.459985 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:29:42.460010 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:29:42.460796 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:29:42.460819 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:29:42.460830 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:29:42.460841 | orchestrator | 2025-07-12 13:29:42.460853 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-07-12 13:29:42.460865 | orchestrator | Saturday 12 July 2025 13:29:11 +0000 (0:00:01.334) 0:07:12.114 ********* 2025-07-12 13:29:42.460876 | orchestrator | ok: [testbed-manager] 2025-07-12 13:29:42.460887 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:29:42.460922 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:29:42.460934 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:29:42.460944 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:29:42.460954 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:29:42.460965 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:29:42.460975 | orchestrator | 2025-07-12 13:29:42.460986 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-07-12 13:29:42.460996 | orchestrator | Saturday 12 July 2025 13:29:13 +0000 (0:00:01.729) 0:07:13.843 ********* 2025-07-12 13:29:42.461007 | orchestrator | ok: [testbed-manager] 2025-07-12 13:29:42.461018 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:29:42.461028 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:29:42.461038 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:29:42.461048 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:29:42.461059 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:29:42.461069 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:29:42.461079 | 
orchestrator | 2025-07-12 13:29:42.461090 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-07-12 13:29:42.461100 | orchestrator | Saturday 12 July 2025 13:29:15 +0000 (0:00:01.684) 0:07:15.528 ********* 2025-07-12 13:29:42.461111 | orchestrator | ok: [testbed-manager] 2025-07-12 13:29:42.461121 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:29:42.461132 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:29:42.461142 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:29:42.461152 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:29:42.461162 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:29:42.461173 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:29:42.461183 | orchestrator | 2025-07-12 13:29:42.461193 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-07-12 13:29:42.461206 | orchestrator | Saturday 12 July 2025 13:29:16 +0000 (0:00:01.115) 0:07:16.644 ********* 2025-07-12 13:29:42.461225 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:29:42.461244 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:29:42.461263 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:29:42.461281 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:29:42.461294 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:29:42.461305 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:29:42.461315 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:29:42.461325 | orchestrator | 2025-07-12 13:29:42.461336 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-07-12 13:29:42.461347 | orchestrator | Saturday 12 July 2025 13:29:17 +0000 (0:00:00.799) 0:07:17.444 ********* 2025-07-12 13:29:42.461357 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:29:42.461368 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:29:42.461378 | orchestrator | skipping: [testbed-node-1] 
2025-07-12 13:29:42.461389 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:29:42.461399 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:29:42.461409 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:29:42.461419 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:29:42.461430 | orchestrator |
2025-07-12 13:29:42.461441 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-07-12 13:29:42.461451 | orchestrator | Saturday 12 July 2025 13:29:17 +0000 (0:00:00.585) 0:07:18.030 *********
2025-07-12 13:29:42.461462 | orchestrator | ok: [testbed-manager]
2025-07-12 13:29:42.461472 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:29:42.461483 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:29:42.461493 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:29:42.461527 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:29:42.461540 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:29:42.461551 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:29:42.461561 | orchestrator |
2025-07-12 13:29:42.461572 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-07-12 13:29:42.461582 | orchestrator | Saturday 12 July 2025 13:29:18 +0000 (0:00:00.783) 0:07:18.813 *********
2025-07-12 13:29:42.461605 | orchestrator | ok: [testbed-manager]
2025-07-12 13:29:42.461615 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:29:42.461626 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:29:42.461636 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:29:42.461646 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:29:42.461657 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:29:42.461682 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:29:42.461693 | orchestrator |
2025-07-12 13:29:42.461704 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-07-12 13:29:42.461717 | orchestrator | Saturday 12 July 2025 13:29:19 +0000 (0:00:00.536) 0:07:19.349 *********
2025-07-12 13:29:42.461736 | orchestrator | ok: [testbed-manager]
2025-07-12 13:29:42.461756 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:29:42.461772 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:29:42.461783 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:29:42.461793 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:29:42.461803 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:29:42.461814 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:29:42.461824 | orchestrator |
2025-07-12 13:29:42.461834 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-07-12 13:29:42.461845 | orchestrator | Saturday 12 July 2025 13:29:19 +0000 (0:00:00.545) 0:07:19.895 *********
2025-07-12 13:29:42.461856 | orchestrator | ok: [testbed-manager]
2025-07-12 13:29:42.461866 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:29:42.461877 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:29:42.461887 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:29:42.461898 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:29:42.461908 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:29:42.461918 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:29:42.461929 | orchestrator |
2025-07-12 13:29:42.461939 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-07-12 13:29:42.461971 | orchestrator | Saturday 12 July 2025 13:29:25 +0000 (0:00:05.751) 0:07:25.647 *********
2025-07-12 13:29:42.461982 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:29:42.461993 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:29:42.462003 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:29:42.462014 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:29:42.462078 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:29:42.462089 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:29:42.462100 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:29:42.462110 | orchestrator |
2025-07-12 13:29:42.462121 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-07-12 13:29:42.462132 | orchestrator | Saturday 12 July 2025 13:29:25 +0000 (0:00:00.565) 0:07:26.212 *********
2025-07-12 13:29:42.462146 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:29:42.462160 | orchestrator |
2025-07-12 13:29:42.462171 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-07-12 13:29:42.462181 | orchestrator | Saturday 12 July 2025 13:29:26 +0000 (0:00:01.059) 0:07:27.272 *********
2025-07-12 13:29:42.462192 | orchestrator | ok: [testbed-manager]
2025-07-12 13:29:42.462202 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:29:42.462213 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:29:42.462223 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:29:42.462234 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:29:42.462244 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:29:42.462255 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:29:42.462265 | orchestrator |
2025-07-12 13:29:42.462276 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-07-12 13:29:42.462286 | orchestrator | Saturday 12 July 2025 13:29:28 +0000 (0:00:01.809) 0:07:29.082 *********
2025-07-12 13:29:42.462297 | orchestrator | ok: [testbed-manager]
2025-07-12 13:29:42.462308 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:29:42.462374 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:29:42.462395 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:29:42.462406 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:29:42.462416 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:29:42.462426 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:29:42.462437 | orchestrator |
2025-07-12 13:29:42.462447 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-07-12 13:29:42.462458 | orchestrator | Saturday 12 July 2025 13:29:29 +0000 (0:00:01.164) 0:07:30.246 *********
2025-07-12 13:29:42.462468 | orchestrator | ok: [testbed-manager]
2025-07-12 13:29:42.462479 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:29:42.462489 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:29:42.462499 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:29:42.462553 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:29:42.462565 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:29:42.462575 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:29:42.462585 | orchestrator |
2025-07-12 13:29:42.462596 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-07-12 13:29:42.462607 | orchestrator | Saturday 12 July 2025 13:29:31 +0000 (0:00:01.077) 0:07:31.324 *********
2025-07-12 13:29:42.462618 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-12 13:29:42.462630 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-12 13:29:42.462641 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-12 13:29:42.462652 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-12 13:29:42.462663 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-12 13:29:42.462673 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-12 13:29:42.462683 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-12 13:29:42.462694 | orchestrator |
2025-07-12 13:29:42.462704 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-07-12 13:29:42.462716 | orchestrator | Saturday 12 July 2025 13:29:32 +0000 (0:00:01.806) 0:07:33.130 *********
2025-07-12 13:29:42.462727 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:29:42.462738 | orchestrator |
2025-07-12 13:29:42.462748 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-07-12 13:29:42.462759 | orchestrator | Saturday 12 July 2025 13:29:33 +0000 (0:00:00.823) 0:07:33.954 *********
2025-07-12 13:29:42.462770 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:29:42.462780 | orchestrator | changed: [testbed-manager]
2025-07-12 13:29:42.462790 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:29:42.462801 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:29:42.462811 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:29:42.462821 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:29:42.462832 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:29:42.462842 | orchestrator |
2025-07-12 13:29:42.462853 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-07-12 13:29:42.462873 | orchestrator | Saturday 12 July 2025 13:29:42 +0000 (0:00:08.757) 0:07:42.712 *********
2025-07-12 13:29:58.773506 | orchestrator | ok: [testbed-manager]
2025-07-12 13:29:58.773656 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:29:58.773672 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:29:58.773712 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:29:58.773724 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:29:58.773734 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:29:58.773745 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:29:58.773756 | orchestrator |
2025-07-12 13:29:58.773769 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-07-12 13:29:58.773781 | orchestrator | Saturday 12 July 2025 13:29:44 +0000 (0:00:01.728) 0:07:44.441 *********
2025-07-12 13:29:58.773792 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:29:58.773848 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:29:58.773861 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:29:58.773872 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:29:58.773882 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:29:58.773892 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:29:58.773903 | orchestrator |
2025-07-12 13:29:58.773914 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-07-12 13:29:58.773925 | orchestrator | Saturday 12 July 2025 13:29:45 +0000 (0:00:01.292) 0:07:45.733 *********
2025-07-12 13:29:58.773936 | orchestrator | changed: [testbed-manager]
2025-07-12 13:29:58.773948 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:29:58.773958 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:29:58.773969 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:29:58.773979 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:29:58.773990 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:29:58.774000 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:29:58.774011 | orchestrator |
2025-07-12 13:29:58.774076 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-07-12 13:29:58.774089 | orchestrator |
2025-07-12 13:29:58.774106 | orchestrator | TASK [Include hardening role] **************************************************
2025-07-12 13:29:58.774125 | orchestrator | Saturday 12 July 2025 13:29:46 +0000 (0:00:01.500) 0:07:47.234 *********
2025-07-12 13:29:58.774138 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:29:58.774149 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:29:58.774161 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:29:58.774173 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:29:58.774185 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:29:58.774197 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:29:58.774208 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:29:58.774220 | orchestrator |
2025-07-12 13:29:58.774232 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-07-12 13:29:58.774245 | orchestrator |
2025-07-12 13:29:58.774256 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-07-12 13:29:58.774268 | orchestrator | Saturday 12 July 2025 13:29:47 +0000 (0:00:00.532) 0:07:47.766 *********
2025-07-12 13:29:58.774280 | orchestrator | changed: [testbed-manager]
2025-07-12 13:29:58.774292 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:29:58.774304 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:29:58.774316 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:29:58.774327 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:29:58.774339 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:29:58.774351 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:29:58.774363 | orchestrator |
2025-07-12 13:29:58.774375 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-07-12 13:29:58.774385 | orchestrator | Saturday 12 July 2025 13:29:48 +0000 (0:00:01.440) 0:07:49.206 *********
2025-07-12 13:29:58.774396 | orchestrator | ok: [testbed-manager]
2025-07-12 13:29:58.774406 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:29:58.774417 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:29:58.774427 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:29:58.774437 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:29:58.774448 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:29:58.774458 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:29:58.774468 | orchestrator |
2025-07-12 13:29:58.774479 | orchestrator | TASK [Include auditd role] *****************************************************
2025-07-12 13:29:58.774499 | orchestrator | Saturday 12 July 2025 13:29:50 +0000 (0:00:01.431) 0:07:50.638 *********
2025-07-12 13:29:58.774528 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:29:58.774539 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:29:58.774550 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:29:58.774561 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:29:58.774571 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:29:58.774582 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:29:58.774592 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:29:58.774603 | orchestrator |
2025-07-12 13:29:58.774613 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-07-12 13:29:58.774624 | orchestrator | Saturday 12 July 2025 13:29:51 +0000 (0:00:01.036) 0:07:51.674 *********
2025-07-12 13:29:58.774634 | orchestrator | changed: [testbed-manager]
2025-07-12 13:29:58.774645 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:29:58.774655 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:29:58.774666 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:29:58.774682 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:29:58.774693 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:29:58.774703 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:29:58.774714 | orchestrator |
2025-07-12 13:29:58.774724 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2025-07-12 13:29:58.774735 | orchestrator |
2025-07-12 13:29:58.774746 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2025-07-12 13:29:58.774756 | orchestrator | Saturday 12 July 2025 13:29:52 +0000 (0:00:01.300) 0:07:52.974 *********
2025-07-12 13:29:58.774767 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:29:58.774779 | orchestrator |
2025-07-12 13:29:58.774790 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-07-12 13:29:58.774800 | orchestrator | Saturday 12 July 2025 13:29:53 +0000 (0:00:00.963) 0:07:53.938 *********
2025-07-12 13:29:58.774811 | orchestrator | ok: [testbed-manager]
2025-07-12 13:29:58.774821 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:29:58.774832 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:29:58.774842 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:29:58.774853 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:29:58.774863 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:29:58.774873 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:29:58.774884 | orchestrator |
2025-07-12 13:29:58.774915 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-07-12 13:29:58.774927 | orchestrator | Saturday 12 July 2025 13:29:54 +0000 (0:00:00.852) 0:07:54.791 *********
2025-07-12 13:29:58.774937 | orchestrator | changed: [testbed-manager]
2025-07-12 13:29:58.774948 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:29:58.774958 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:29:58.774968 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:29:58.774979 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:29:58.774989 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:29:58.774999 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:29:58.775010 | orchestrator |
2025-07-12 13:29:58.775020 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2025-07-12 13:29:58.775031 | orchestrator | Saturday 12 July 2025 13:29:55 +0000 (0:00:01.165) 0:07:55.956 *********
2025-07-12 13:29:58.775042 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:29:58.775052 | orchestrator |
2025-07-12 13:29:58.775063 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-07-12 13:29:58.775073 | orchestrator | Saturday 12 July 2025 13:29:56 +0000 (0:00:01.068) 0:07:57.024 *********
2025-07-12 13:29:58.775084 | orchestrator | ok: [testbed-manager]
2025-07-12 13:29:58.775094 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:29:58.775112 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:29:58.775123 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:29:58.775133 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:29:58.775143 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:29:58.775154 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:29:58.775164 | orchestrator |
2025-07-12 13:29:58.775175 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-07-12 13:29:58.775185 | orchestrator | Saturday 12 July 2025 13:29:57 +0000 (0:00:00.865) 0:07:57.889 *********
2025-07-12 13:29:58.775196 | orchestrator | changed: [testbed-manager]
2025-07-12 13:29:58.775206 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:29:58.775217 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:29:58.775227 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:29:58.775238 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:29:58.775248 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:29:58.775258 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:29:58.775269 | orchestrator |
2025-07-12 13:29:58.775279 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:29:58.775291 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-07-12 13:29:58.775302 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-07-12 13:29:58.775313 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-07-12 13:29:58.775324 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-07-12 13:29:58.775334 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-07-12 13:29:58.775345 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-07-12 13:29:58.775355 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-07-12 13:29:58.775366 | orchestrator |
2025-07-12 13:29:58.775377 | orchestrator |
2025-07-12 13:29:58.775387 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:29:58.775398 | orchestrator | Saturday 12 July 2025 13:29:58 +0000 (0:00:01.128) 0:07:59.018 *********
2025-07-12 13:29:58.775409 | orchestrator | ===============================================================================
2025-07-12 13:29:58.775419 | orchestrator | osism.commons.packages : Install required packages --------------------- 76.36s
2025-07-12 13:29:58.775430 | orchestrator | osism.commons.packages : Download required packages -------------------- 40.22s
2025-07-12 13:29:58.775440 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.88s
2025-07-12 13:29:58.775451 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.25s
2025-07-12 13:29:58.775461 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.17s
2025-07-12 13:29:58.775472 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.66s
2025-07-12 13:29:58.775483 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.91s
2025-07-12 13:29:58.775493 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.69s
2025-07-12 13:29:58.775504 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.95s
2025-07-12 13:29:58.775545 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.78s
2025-07-12 13:29:58.775556 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.76s
2025-07-12 13:29:58.775575 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.57s
2025-07-12 13:29:58.775585 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.51s
2025-07-12 13:29:58.775596 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.93s
2025-07-12 13:29:58.775615 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.81s
2025-07-12 13:29:59.281241 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.52s
2025-07-12 13:29:59.281343 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.14s
2025-07-12 13:29:59.281358 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.14s
2025-07-12 13:29:59.281369 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.83s
2025-07-12 13:29:59.281380 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.75s
2025-07-12 13:29:59.583314 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-07-12 13:29:59.583400 | orchestrator | + osism apply network
2025-07-12 13:30:12.176769 | orchestrator | 2025-07-12 13:30:12 | INFO  | Task 80d2b87a-f714-49c7-8d7e-a3d6de7a5049 (network) was prepared for execution.
2025-07-12 13:30:12.176867 | orchestrator | 2025-07-12 13:30:12 | INFO  | It takes a moment until task 80d2b87a-f714-49c7-8d7e-a3d6de7a5049 (network) has been started and output is visible here.
2025-07-12 13:30:41.338594 | orchestrator |
2025-07-12 13:30:41.338715 | orchestrator | PLAY [Apply role network] ******************************************************
2025-07-12 13:30:41.338732 | orchestrator |
2025-07-12 13:30:41.338744 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-07-12 13:30:41.338755 | orchestrator | Saturday 12 July 2025 13:30:16 +0000 (0:00:00.289) 0:00:00.289 *********
2025-07-12 13:30:41.338767 | orchestrator | ok: [testbed-manager]
2025-07-12 13:30:41.338779 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:30:41.338790 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:30:41.338801 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:30:41.338811 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:30:41.338822 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:30:41.338832 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:30:41.338843 | orchestrator |
2025-07-12 13:30:41.338854 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-07-12 13:30:41.338864 | orchestrator | Saturday 12 July 2025 13:30:17 +0000 (0:00:00.710) 0:00:00.999 *********
2025-07-12 13:30:41.338878 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:30:41.338891 | orchestrator |
2025-07-12 13:30:41.338902 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-07-12 13:30:41.338913 | orchestrator | Saturday 12 July 2025 13:30:18 +0000 (0:00:01.237) 0:00:02.237 *********
2025-07-12 13:30:41.338923 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:30:41.338934 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:30:41.338945 | orchestrator | ok: [testbed-manager]
2025-07-12 13:30:41.338955 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:30:41.338966 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:30:41.338976 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:30:41.338987 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:30:41.338997 | orchestrator |
2025-07-12 13:30:41.339008 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-07-12 13:30:41.339018 | orchestrator | Saturday 12 July 2025 13:30:20 +0000 (0:00:01.894) 0:00:04.131 *********
2025-07-12 13:30:41.339029 | orchestrator | ok: [testbed-manager]
2025-07-12 13:30:41.339039 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:30:41.339050 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:30:41.339060 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:30:41.339070 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:30:41.339081 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:30:41.339118 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:30:41.339131 | orchestrator |
2025-07-12 13:30:41.339144 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-07-12 13:30:41.339156 | orchestrator | Saturday 12 July 2025 13:30:22 +0000 (0:00:01.932) 0:00:06.064 *********
2025-07-12 13:30:41.339168 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2025-07-12 13:30:41.339181 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2025-07-12 13:30:41.339193 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2025-07-12 13:30:41.339206 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2025-07-12 13:30:41.339218 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2025-07-12 13:30:41.339230 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2025-07-12 13:30:41.339241 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2025-07-12 13:30:41.339253 | orchestrator |
2025-07-12 13:30:41.339265 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2025-07-12 13:30:41.339293 | orchestrator | Saturday 12 July 2025 13:30:23 +0000 (0:00:00.982) 0:00:07.046 *********
2025-07-12 13:30:41.339305 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-12 13:30:41.339318 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-07-12 13:30:41.339329 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-12 13:30:41.339341 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-07-12 13:30:41.339353 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-07-12 13:30:41.339365 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-07-12 13:30:41.339377 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-07-12 13:30:41.339389 | orchestrator |
2025-07-12 13:30:41.339400 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-07-12 13:30:41.339412 | orchestrator | Saturday 12 July 2025 13:30:26 +0000 (0:00:03.585) 0:00:10.631 *********
2025-07-12 13:30:41.339425 | orchestrator | changed: [testbed-manager]
2025-07-12 13:30:41.339438 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:30:41.339449 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:30:41.339459 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:30:41.339469 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:30:41.339479 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:30:41.339490 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:30:41.339500 | orchestrator |
2025-07-12 13:30:41.339511 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-07-12 13:30:41.339521 | orchestrator | Saturday 12 July 2025 13:30:28 +0000 (0:00:01.457) 0:00:12.089 *********
2025-07-12 13:30:41.339557 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-12 13:30:41.339568 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-12 13:30:41.339578 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-07-12 13:30:41.339588 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-07-12 13:30:41.339599 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-07-12 13:30:41.339609 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-07-12 13:30:41.339620 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-07-12 13:30:41.339630 | orchestrator |
2025-07-12 13:30:41.339641 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2025-07-12 13:30:41.339652 | orchestrator | Saturday 12 July 2025 13:30:30 +0000 (0:00:01.944) 0:00:14.033 *********
2025-07-12 13:30:41.339662 | orchestrator | ok: [testbed-manager]
2025-07-12 13:30:41.339673 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:30:41.339683 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:30:41.339694 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:30:41.339704 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:30:41.339715 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:30:41.339725 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:30:41.339736 | orchestrator |
2025-07-12 13:30:41.339746 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2025-07-12 13:30:41.339775 | orchestrator | Saturday 12 July 2025 13:30:31 +0000 (0:00:01.173) 0:00:15.207 *********
2025-07-12 13:30:41.339795 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:30:41.339806 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:30:41.339816 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:30:41.339827 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:30:41.339838 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:30:41.339848 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:30:41.339859 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:30:41.339869 | orchestrator |
2025-07-12 13:30:41.339880 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2025-07-12 13:30:41.339890 | orchestrator | Saturday 12 July 2025 13:30:32 +0000 (0:00:00.665) 0:00:15.873 *********
2025-07-12 13:30:41.339901 | orchestrator | ok: [testbed-manager]
2025-07-12 13:30:41.339912 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:30:41.339922 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:30:41.339933 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:30:41.339943 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:30:41.339954 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:30:41.339964 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:30:41.339975 | orchestrator |
2025-07-12 13:30:41.339985 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2025-07-12 13:30:41.339996 | orchestrator | Saturday 12 July 2025 13:30:34 +0000 (0:00:02.148) 0:00:18.021 *********
2025-07-12 13:30:41.340007 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:30:41.340017 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:30:41.340028 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:30:41.340038 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:30:41.340048 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:30:41.340059 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:30:41.340070 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2025-07-12 13:30:41.340082 | orchestrator |
2025-07-12 13:30:41.340093 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-07-12 13:30:41.340104 | orchestrator | Saturday 12 July 2025 13:30:35 +0000 (0:00:00.931) 0:00:18.952 *********
2025-07-12 13:30:41.340114 | orchestrator | ok: [testbed-manager]
2025-07-12 13:30:41.340125 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:30:41.340135 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:30:41.340146 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:30:41.340156 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:30:41.340167 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:30:41.340177 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:30:41.340188 | orchestrator |
2025-07-12 13:30:41.340198 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2025-07-12 13:30:41.340209 | orchestrator | Saturday 12 July 2025 13:30:36 +0000 (0:00:01.683) 0:00:20.636 *********
2025-07-12 13:30:41.340220 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:30:41.340232 | orchestrator |
2025-07-12 13:30:41.340243 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-07-12 13:30:41.340254 | orchestrator | Saturday 12 July 2025 13:30:38 +0000 (0:00:01.284) 0:00:21.921 *********
2025-07-12 13:30:41.340264 | orchestrator | ok: [testbed-manager]
2025-07-12 13:30:41.340275 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:30:41.340285 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:30:41.340296 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:30:41.340312 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:30:41.340322 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:30:41.340333 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:30:41.340343 | orchestrator |
2025-07-12 13:30:41.340354 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2025-07-12 13:30:41.340365 | orchestrator | Saturday 12 July 2025 13:30:39 +0000 (0:00:01.028) 0:00:22.950 *********
2025-07-12 13:30:41.340381 | orchestrator | ok: [testbed-manager]
2025-07-12 13:30:41.340392 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:30:41.340402 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:30:41.340413 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:30:41.340423 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:30:41.340434 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:30:41.340444 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:30:41.340454 | orchestrator |
2025-07-12 13:30:41.340465 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-07-12 13:30:41.340476 | orchestrator | Saturday 12 July 2025 13:30:40 +0000 (0:00:00.851) 0:00:23.802 *********
2025-07-12 13:30:41.340486 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2025-07-12 13:30:41.340497 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2025-07-12 13:30:41.340508 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2025-07-12 13:30:41.340518 | orchestrator | skipping: [testbed-node-0] => 
(item=/etc/netplan/01-osism.yaml)  2025-07-12 13:30:41.340556 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-12 13:30:41.340567 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-07-12 13:30:41.340578 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-12 13:30:41.340588 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-07-12 13:30:41.340599 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-12 13:30:41.340609 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-07-12 13:30:41.340620 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-12 13:30:41.340630 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-07-12 13:30:41.340640 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-12 13:30:41.340651 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-07-12 13:30:41.340661 | orchestrator | 2025-07-12 13:30:41.340679 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-07-12 13:30:58.737382 | orchestrator | Saturday 12 July 2025 13:30:41 +0000 (0:00:01.194) 0:00:24.996 ********* 2025-07-12 13:30:58.737502 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:30:58.737520 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:30:58.737605 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:30:58.737626 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:30:58.737644 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:30:58.737662 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:30:58.737680 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:30:58.737698 | orchestrator | 2025-07-12 13:30:58.737719 | 
orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-07-12 13:30:58.737739 | orchestrator | Saturday 12 July 2025 13:30:41 +0000 (0:00:00.626) 0:00:25.622 ********* 2025-07-12 13:30:58.737753 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-2, testbed-manager, testbed-node-3, testbed-node-0, testbed-node-1, testbed-node-4, testbed-node-5 2025-07-12 13:30:58.737767 | orchestrator | 2025-07-12 13:30:58.737778 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-07-12 13:30:58.737788 | orchestrator | Saturday 12 July 2025 13:30:46 +0000 (0:00:04.599) 0:00:30.222 ********* 2025-07-12 13:30:58.737802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-07-12 13:30:58.737813 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-07-12 13:30:58.737852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-07-12 13:30:58.737865 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-07-12 13:30:58.737876 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-07-12 13:30:58.737901 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-07-12 13:30:58.737912 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-07-12 13:30:58.737933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-07-12 13:30:58.737946 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-07-12 13:30:58.737959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-07-12 13:30:58.737971 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', 
'192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-07-12 13:30:58.738004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-07-12 13:30:58.738078 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-07-12 13:30:58.738091 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-07-12 13:30:58.738103 | orchestrator | 2025-07-12 13:30:58.738117 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-07-12 13:30:58.738129 | orchestrator | Saturday 12 July 2025 13:30:52 +0000 (0:00:06.166) 0:00:36.389 ********* 2025-07-12 13:30:58.738141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-07-12 13:30:58.738163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-07-12 13:30:58.738175 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': 
['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-07-12 13:30:58.738189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-07-12 13:30:58.738201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-07-12 13:30:58.738214 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-07-12 13:30:58.738227 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-07-12 13:30:58.738240 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-07-12 13:30:58.738263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 
2025-07-12 13:30:58.738276 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-07-12 13:30:58.738287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-07-12 13:30:58.738298 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-07-12 13:30:58.738323 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-07-12 13:31:04.939201 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-07-12 13:31:04.939378 | orchestrator | 2025-07-12 13:31:04.939410 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-07-12 13:31:04.939432 | orchestrator | Saturday 12 July 2025 13:30:58 +0000 (0:00:06.001) 0:00:42.391 ********* 2025-07-12 13:31:04.939455 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, 
testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:31:04.939475 | orchestrator | 2025-07-12 13:31:04.939494 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-07-12 13:31:04.939512 | orchestrator | Saturday 12 July 2025 13:30:59 +0000 (0:00:01.244) 0:00:43.635 ********* 2025-07-12 13:31:04.939559 | orchestrator | ok: [testbed-manager] 2025-07-12 13:31:04.939582 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:31:04.939600 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:31:04.939618 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:31:04.939637 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:31:04.939655 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:31:04.939673 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:31:04.939691 | orchestrator | 2025-07-12 13:31:04.939710 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-07-12 13:31:04.939731 | orchestrator | Saturday 12 July 2025 13:31:01 +0000 (0:00:01.156) 0:00:44.792 ********* 2025-07-12 13:31:04.939750 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-12 13:31:04.939770 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-12 13:31:04.939790 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-12 13:31:04.939808 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-12 13:31:04.939828 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-12 13:31:04.939848 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-12 13:31:04.939866 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-12 13:31:04.939885 | 
orchestrator | skipping: [testbed-manager] 2025-07-12 13:31:04.939905 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-12 13:31:04.939924 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-12 13:31:04.939964 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-12 13:31:04.939984 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:31:04.940003 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-12 13:31:04.940022 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-12 13:31:04.940041 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-12 13:31:04.940059 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-12 13:31:04.940078 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-12 13:31:04.940096 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-12 13:31:04.940114 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:31:04.940132 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-12 13:31:04.940153 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-12 13:31:04.940171 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-12 13:31:04.940189 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-12 13:31:04.940207 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:31:04.940226 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-12 13:31:04.940261 | orchestrator | skipping: 
[testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-12 13:31:04.940279 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-12 13:31:04.940297 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-12 13:31:04.940317 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:31:04.940335 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:31:04.940354 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-12 13:31:04.940372 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-12 13:31:04.940390 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-12 13:31:04.940408 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-12 13:31:04.940427 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:31:04.940446 | orchestrator | 2025-07-12 13:31:04.940464 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-07-12 13:31:04.940498 | orchestrator | Saturday 12 July 2025 13:31:03 +0000 (0:00:02.050) 0:00:46.842 ********* 2025-07-12 13:31:04.940511 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:31:04.940530 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:31:04.940591 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:31:04.940609 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:31:04.940628 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:31:04.940646 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:31:04.940664 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:31:04.940684 | orchestrator | 2025-07-12 13:31:04.940703 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-07-12 13:31:04.940722 | orchestrator | 
Saturday 12 July 2025 13:31:03 +0000 (0:00:00.637) 0:00:47.479 ********* 2025-07-12 13:31:04.940740 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:31:04.940758 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:31:04.940778 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:31:04.940797 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:31:04.940814 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:31:04.940832 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:31:04.940850 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:31:04.940868 | orchestrator | 2025-07-12 13:31:04.940886 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:31:04.940904 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 13:31:04.940924 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 13:31:04.940942 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 13:31:04.940959 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 13:31:04.940974 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 13:31:04.940989 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 13:31:04.941005 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 13:31:04.941022 | orchestrator | 2025-07-12 13:31:04.941037 | orchestrator | 2025-07-12 13:31:04.941054 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:31:04.941087 | orchestrator | Saturday 12 July 2025 13:31:04 +0000 (0:00:00.733) 0:00:48.213 
********* 2025-07-12 13:31:04.941117 | orchestrator | =============================================================================== 2025-07-12 13:31:04.941134 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.17s 2025-07-12 13:31:04.941152 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.00s 2025-07-12 13:31:04.941171 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.60s 2025-07-12 13:31:04.941187 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.59s 2025-07-12 13:31:04.941203 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.15s 2025-07-12 13:31:04.941219 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.05s 2025-07-12 13:31:04.941237 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.94s 2025-07-12 13:31:04.941256 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.93s 2025-07-12 13:31:04.941273 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.89s 2025-07-12 13:31:04.941292 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.68s 2025-07-12 13:31:04.941310 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.46s 2025-07-12 13:31:04.941326 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.28s 2025-07-12 13:31:04.941342 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.24s 2025-07-12 13:31:04.941359 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.24s 2025-07-12 13:31:04.941376 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.19s 
2025-07-12 13:31:04.941393 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.17s 2025-07-12 13:31:04.941410 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.16s 2025-07-12 13:31:04.941427 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.03s 2025-07-12 13:31:04.941444 | orchestrator | osism.commons.network : Create required directories --------------------- 0.98s 2025-07-12 13:31:04.941462 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.93s 2025-07-12 13:31:05.234806 | orchestrator | + osism apply wireguard 2025-07-12 13:31:17.200256 | orchestrator | 2025-07-12 13:31:17 | INFO  | Task a7d49ecb-4957-4c24-ad63-2208289dffd4 (wireguard) was prepared for execution. 2025-07-12 13:31:17.200372 | orchestrator | 2025-07-12 13:31:17 | INFO  | It takes a moment until task a7d49ecb-4957-4c24-ad63-2208289dffd4 (wireguard) has been started and output is visible here. 
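[editor's note: the "Create systemd networkd netdev files" / "Create systemd networkd network files" tasks above render one file pair per vxlan item. A minimal illustrative sketch of what such a pair could look like for vxlan0 on testbed-manager, using the logged parameters (vni 42, local_ip 192.168.16.5, mtu 1350, address 192.168.112.5/20) and the 30-vxlan0.* naming seen in the cleanup task; this is not the role's actual template:]

```ini
# /etc/systemd/network/30-vxlan0.netdev (sketch)
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5
# The 'dests' list (192.168.16.10 ... 192.168.16.15) would be installed as
# static FDB entries toward the remote VTEPs, e.g. via 'bridge fdb append'.

# /etc/systemd/network/30-vxlan0.network (sketch)
[Match]
Name=vxlan0

[Network]
Address=192.168.112.5/20
```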
2025-07-12 13:31:36.894692 | orchestrator |
2025-07-12 13:31:36.894808 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2025-07-12 13:31:36.894823 | orchestrator |
2025-07-12 13:31:36.894834 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2025-07-12 13:31:36.894844 | orchestrator | Saturday 12 July 2025 13:31:21 +0000 (0:00:00.235) 0:00:00.235 *********
2025-07-12 13:31:36.894854 | orchestrator | ok: [testbed-manager]
2025-07-12 13:31:36.894865 | orchestrator |
2025-07-12 13:31:36.894875 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2025-07-12 13:31:36.894884 | orchestrator | Saturday 12 July 2025 13:31:22 +0000 (0:00:01.570) 0:00:01.805 *********
2025-07-12 13:31:36.894894 | orchestrator | changed: [testbed-manager]
2025-07-12 13:31:36.894904 | orchestrator |
2025-07-12 13:31:36.894913 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2025-07-12 13:31:36.894923 | orchestrator | Saturday 12 July 2025 13:31:29 +0000 (0:00:06.316) 0:00:08.122 *********
2025-07-12 13:31:36.894932 | orchestrator | changed: [testbed-manager]
2025-07-12 13:31:36.894942 | orchestrator |
2025-07-12 13:31:36.894951 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2025-07-12 13:31:36.894985 | orchestrator | Saturday 12 July 2025 13:31:29 +0000 (0:00:00.565) 0:00:08.687 *********
2025-07-12 13:31:36.894995 | orchestrator | changed: [testbed-manager]
2025-07-12 13:31:36.895004 | orchestrator |
2025-07-12 13:31:36.895014 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2025-07-12 13:31:36.895024 | orchestrator | Saturday 12 July 2025 13:31:30 +0000 (0:00:00.426) 0:00:09.113 *********
2025-07-12 13:31:36.895033 | orchestrator | ok: [testbed-manager]
2025-07-12 13:31:36.895043 | orchestrator |
2025-07-12 13:31:36.895052 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2025-07-12 13:31:36.895062 | orchestrator | Saturday 12 July 2025 13:31:30 +0000 (0:00:00.546) 0:00:09.660 *********
2025-07-12 13:31:36.895071 | orchestrator | ok: [testbed-manager]
2025-07-12 13:31:36.895080 | orchestrator |
2025-07-12 13:31:36.895090 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2025-07-12 13:31:36.895099 | orchestrator | Saturday 12 July 2025 13:31:31 +0000 (0:00:00.542) 0:00:10.203 *********
2025-07-12 13:31:36.895108 | orchestrator | ok: [testbed-manager]
2025-07-12 13:31:36.895118 | orchestrator |
2025-07-12 13:31:36.895127 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2025-07-12 13:31:36.895136 | orchestrator | Saturday 12 July 2025 13:31:31 +0000 (0:00:00.456) 0:00:10.660 *********
2025-07-12 13:31:36.895145 | orchestrator | changed: [testbed-manager]
2025-07-12 13:31:36.895155 | orchestrator |
2025-07-12 13:31:36.895164 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2025-07-12 13:31:36.895173 | orchestrator | Saturday 12 July 2025 13:31:32 +0000 (0:00:01.221) 0:00:11.881 *********
2025-07-12 13:31:36.895184 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-12 13:31:36.895196 | orchestrator | changed: [testbed-manager]
2025-07-12 13:31:36.895207 | orchestrator |
2025-07-12 13:31:36.895218 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2025-07-12 13:31:36.895229 | orchestrator | Saturday 12 July 2025 13:31:33 +0000 (0:00:00.946) 0:00:12.828 *********
2025-07-12 13:31:36.895254 | orchestrator | changed: [testbed-manager]
2025-07-12 13:31:36.895264 | orchestrator |
2025-07-12 13:31:36.895275 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-07-12 13:31:36.895286 | orchestrator | Saturday 12 July 2025 13:31:35 +0000 (0:00:01.673) 0:00:14.502 *********
2025-07-12 13:31:36.895297 | orchestrator | changed: [testbed-manager]
2025-07-12 13:31:36.895307 | orchestrator |
2025-07-12 13:31:36.895318 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:31:36.895329 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:31:36.895340 | orchestrator |
2025-07-12 13:31:36.895351 | orchestrator |
2025-07-12 13:31:36.895362 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:31:36.895373 | orchestrator | Saturday 12 July 2025 13:31:36 +0000 (0:00:00.971) 0:00:15.473 *********
2025-07-12 13:31:36.895384 | orchestrator | ===============================================================================
2025-07-12 13:31:36.895395 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.32s
2025-07-12 13:31:36.895405 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.67s
2025-07-12 13:31:36.895416 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.57s
2025-07-12 13:31:36.895426 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.22s
2025-07-12 13:31:36.895437 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.97s
2025-07-12 13:31:36.895448 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.95s
2025-07-12 13:31:36.895459 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.57s
2025-07-12 13:31:36.895469 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.55s
2025-07-12 13:31:36.895488 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.54s
2025-07-12 13:31:36.895499 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.46s
2025-07-12 13:31:36.895510 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.43s
2025-07-12 13:31:37.167827 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2025-07-12 13:31:37.196097 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2025-07-12 13:31:37.196146 | orchestrator | Dload Upload Total Spent Left Speed
2025-07-12 13:31:37.275435 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 188 0 --:--:-- --:--:-- --:--:-- 187
2025-07-12 13:31:37.288906 | orchestrator | + osism apply --environment custom workarounds
2025-07-12 13:31:39.106913 | orchestrator | 2025-07-12 13:31:39 | INFO  | Trying to run play workarounds in environment custom
2025-07-12 13:31:49.259271 | orchestrator | 2025-07-12 13:31:49 | INFO  | Task ee2612e0-9032-42b7-b71b-42381353ba83 (workarounds) was prepared for execution.
2025-07-12 13:31:49.259383 | orchestrator | 2025-07-12 13:31:49 | INFO  | It takes a moment until task ee2612e0-9032-42b7-b71b-42381353ba83 (workarounds) has been started and output is visible here.
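[editor's note: the wireguard play above generates server/preshared keys and then templates /etc/wireguard/wg0.conf plus client configuration files for wg-quick@wg0. A minimal sketch of the shape such a server config takes; the keys, port, and tunnel subnet are placeholders, not values from this run or the role's actual template:]

```ini
# /etc/wireguard/wg0.conf (sketch, placeholder values)
[Interface]
Address = 192.168.48.1/24        ; assumed tunnel subnet, not from the log
ListenPort = 51820               ; WireGuard default port, assumed
PrivateKey = <server-private-key>

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>
AllowedIPs = 192.168.48.2/32     ; client's tunnel address
```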
2025-07-12 13:32:14.014789 | orchestrator | 2025-07-12 13:32:14.014880 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 13:32:14.014891 | orchestrator | 2025-07-12 13:32:14.014898 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-07-12 13:32:14.014906 | orchestrator | Saturday 12 July 2025 13:31:53 +0000 (0:00:00.146) 0:00:00.146 ********* 2025-07-12 13:32:14.014913 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-07-12 13:32:14.014920 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-07-12 13:32:14.014926 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-07-12 13:32:14.014933 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-07-12 13:32:14.014939 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-07-12 13:32:14.014945 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-07-12 13:32:14.014952 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-07-12 13:32:14.014958 | orchestrator | 2025-07-12 13:32:14.014965 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-07-12 13:32:14.014971 | orchestrator | 2025-07-12 13:32:14.014977 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-07-12 13:32:14.014984 | orchestrator | Saturday 12 July 2025 13:31:53 +0000 (0:00:00.772) 0:00:00.919 ********* 2025-07-12 13:32:14.014990 | orchestrator | ok: [testbed-manager] 2025-07-12 13:32:14.014999 | orchestrator | 2025-07-12 13:32:14.015005 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-07-12 13:32:14.015011 | orchestrator | 2025-07-12 13:32:14.015018 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2025-07-12 13:32:14.015025 | orchestrator | Saturday 12 July 2025 13:31:56 +0000 (0:00:02.315) 0:00:03.235 ********* 2025-07-12 13:32:14.015031 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:32:14.015038 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:32:14.015044 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:32:14.015050 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:32:14.015057 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:32:14.015063 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:32:14.015069 | orchestrator | 2025-07-12 13:32:14.015076 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-07-12 13:32:14.015083 | orchestrator | 2025-07-12 13:32:14.015101 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-07-12 13:32:14.015125 | orchestrator | Saturday 12 July 2025 13:31:58 +0000 (0:00:01.851) 0:00:05.086 ********* 2025-07-12 13:32:14.015133 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-12 13:32:14.015141 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-12 13:32:14.015147 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-12 13:32:14.015154 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-12 13:32:14.015160 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-12 13:32:14.015166 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-12 13:32:14.015173 | orchestrator | 2025-07-12 13:32:14.015179 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2025-07-12 13:32:14.015186 | orchestrator | Saturday 12 July 2025 13:31:59 +0000 (0:00:01.449) 0:00:06.536 ********* 2025-07-12 13:32:14.015192 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:32:14.015199 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:32:14.015205 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:32:14.015211 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:32:14.015218 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:32:14.015224 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:32:14.015230 | orchestrator | 2025-07-12 13:32:14.015237 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-07-12 13:32:14.015243 | orchestrator | Saturday 12 July 2025 13:32:03 +0000 (0:00:03.789) 0:00:10.325 ********* 2025-07-12 13:32:14.015250 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:32:14.015256 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:32:14.015262 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:32:14.015269 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:32:14.015275 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:32:14.015281 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:32:14.015288 | orchestrator | 2025-07-12 13:32:14.015294 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-07-12 13:32:14.015300 | orchestrator | 2025-07-12 13:32:14.015307 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-07-12 13:32:14.015314 | orchestrator | Saturday 12 July 2025 13:32:04 +0000 (0:00:00.710) 0:00:11.036 ********* 2025-07-12 13:32:14.015320 | orchestrator | changed: [testbed-manager] 2025-07-12 13:32:14.015327 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:32:14.015333 | orchestrator | changed: [testbed-node-4] 2025-07-12 
13:32:14.015340 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:32:14.015346 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:32:14.015352 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:32:14.015359 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:32:14.015366 | orchestrator | 2025-07-12 13:32:14.015373 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-07-12 13:32:14.015381 | orchestrator | Saturday 12 July 2025 13:32:05 +0000 (0:00:01.737) 0:00:12.774 ********* 2025-07-12 13:32:14.015388 | orchestrator | changed: [testbed-manager] 2025-07-12 13:32:14.015396 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:32:14.015403 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:32:14.015410 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:32:14.015418 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:32:14.015425 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:32:14.015446 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:32:14.015455 | orchestrator | 2025-07-12 13:32:14.015463 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-07-12 13:32:14.015471 | orchestrator | Saturday 12 July 2025 13:32:07 +0000 (0:00:01.619) 0:00:14.393 ********* 2025-07-12 13:32:14.015478 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:32:14.015491 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:32:14.015497 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:32:14.015504 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:32:14.015510 | orchestrator | ok: [testbed-manager] 2025-07-12 13:32:14.015517 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:32:14.015523 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:32:14.015530 | orchestrator | 2025-07-12 13:32:14.015536 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-07-12 13:32:14.015543 | orchestrator 
| Saturday 12 July 2025 13:32:08 +0000 (0:00:01.488) 0:00:15.881 ********* 2025-07-12 13:32:14.015549 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:32:14.015592 | orchestrator | changed: [testbed-manager] 2025-07-12 13:32:14.015598 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:32:14.015604 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:32:14.015611 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:32:14.015617 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:32:14.015623 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:32:14.015629 | orchestrator | 2025-07-12 13:32:14.015635 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-07-12 13:32:14.015641 | orchestrator | Saturday 12 July 2025 13:32:10 +0000 (0:00:01.841) 0:00:17.722 ********* 2025-07-12 13:32:14.015647 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:32:14.015653 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:32:14.015659 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:32:14.015665 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:32:14.015671 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:32:14.015677 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:32:14.015683 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:32:14.015689 | orchestrator | 2025-07-12 13:32:14.015695 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-07-12 13:32:14.015702 | orchestrator | 2025-07-12 13:32:14.015708 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-07-12 13:32:14.015714 | orchestrator | Saturday 12 July 2025 13:32:11 +0000 (0:00:00.611) 0:00:18.333 ********* 2025-07-12 13:32:14.015720 | orchestrator | ok: [testbed-manager] 2025-07-12 13:32:14.015726 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:32:14.015737 | orchestrator | ok: 
[testbed-node-4] 2025-07-12 13:32:14.015743 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:32:14.015749 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:32:14.015755 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:32:14.015761 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:32:14.015767 | orchestrator | 2025-07-12 13:32:14.015773 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:32:14.015781 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-12 13:32:14.015788 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 13:32:14.015794 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 13:32:14.015801 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 13:32:14.015807 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 13:32:14.015813 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 13:32:14.015819 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 13:32:14.015830 | orchestrator | 2025-07-12 13:32:14.015837 | orchestrator | 2025-07-12 13:32:14.015843 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:32:14.015849 | orchestrator | Saturday 12 July 2025 13:32:13 +0000 (0:00:02.629) 0:00:20.963 ********* 2025-07-12 13:32:14.015855 | orchestrator | =============================================================================== 2025-07-12 13:32:14.015861 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.79s 2025-07-12 13:32:14.015867 | orchestrator | 
Install python3-docker -------------------------------------------------- 2.63s 2025-07-12 13:32:14.015873 | orchestrator | Apply netplan configuration --------------------------------------------- 2.32s 2025-07-12 13:32:14.015880 | orchestrator | Apply netplan configuration --------------------------------------------- 1.85s 2025-07-12 13:32:14.015886 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.84s 2025-07-12 13:32:14.015892 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.74s 2025-07-12 13:32:14.015898 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.62s 2025-07-12 13:32:14.015904 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.49s 2025-07-12 13:32:14.015910 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.45s 2025-07-12 13:32:14.015916 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.77s 2025-07-12 13:32:14.015922 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.71s 2025-07-12 13:32:14.015932 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.61s 2025-07-12 13:32:14.636188 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-07-12 13:32:26.605777 | orchestrator | 2025-07-12 13:32:26 | INFO  | Task b3927b7b-c97e-442f-8a76-fff65b76ac0f (reboot) was prepared for execution. 2025-07-12 13:32:26.605899 | orchestrator | 2025-07-12 13:32:26 | INFO  | It takes a moment until task b3927b7b-c97e-442f-8a76-fff65b76ac0f (reboot) has been started and output is visible here. 
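The `-e ireallymeanit=yes` flag passed to `osism apply reboot` above acts as a confirmation gate: the play's first task ("Exit playbook, if user did not mean to reboot systems") aborts unless the caller explicitly opts in. A minimal shell rendering of that gate (the variable name comes from the command line above; this implementation is an assumption, not the play's actual code):

```shell
#!/usr/bin/env bash
# Confirmation gate: refuse to proceed unless the caller explicitly
# passed ireallymeanit=yes, mirroring the reboot play's first task.
require_confirmation() {
    local ireallymeanit="$1"
    if [[ "${ireallymeanit}" != "yes" ]]; then
        echo "Refusing to reboot: pass ireallymeanit=yes to confirm" >&2
        return 1
    fi
}
```

With the flag set, the gate task is skipped (as seen for every node in the recap above) and the reboot tasks run.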
2025-07-12 13:32:37.121021 | orchestrator | 2025-07-12 13:32:37.121171 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-12 13:32:37.121198 | orchestrator | 2025-07-12 13:32:37.121217 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-12 13:32:37.121237 | orchestrator | Saturday 12 July 2025 13:32:30 +0000 (0:00:00.235) 0:00:00.235 ********* 2025-07-12 13:32:37.121255 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:32:37.121275 | orchestrator | 2025-07-12 13:32:37.121294 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-12 13:32:37.121312 | orchestrator | Saturday 12 July 2025 13:32:31 +0000 (0:00:00.118) 0:00:00.353 ********* 2025-07-12 13:32:37.121329 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:32:37.121347 | orchestrator | 2025-07-12 13:32:37.121365 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-12 13:32:37.121384 | orchestrator | Saturday 12 July 2025 13:32:32 +0000 (0:00:00.979) 0:00:01.332 ********* 2025-07-12 13:32:37.121402 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:32:37.121421 | orchestrator | 2025-07-12 13:32:37.121439 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-12 13:32:37.121457 | orchestrator | 2025-07-12 13:32:37.121475 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-12 13:32:37.121494 | orchestrator | Saturday 12 July 2025 13:32:32 +0000 (0:00:00.133) 0:00:01.466 ********* 2025-07-12 13:32:37.121511 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:32:37.121529 | orchestrator | 2025-07-12 13:32:37.121548 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-12 13:32:37.121602 | orchestrator | Saturday 12 July 2025 
13:32:32 +0000 (0:00:00.114) 0:00:01.581 ********* 2025-07-12 13:32:37.121623 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:32:37.121642 | orchestrator | 2025-07-12 13:32:37.121728 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-12 13:32:37.121747 | orchestrator | Saturday 12 July 2025 13:32:32 +0000 (0:00:00.663) 0:00:02.245 ********* 2025-07-12 13:32:37.121766 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:32:37.121785 | orchestrator | 2025-07-12 13:32:37.121804 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-12 13:32:37.121823 | orchestrator | 2025-07-12 13:32:37.121842 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-12 13:32:37.121860 | orchestrator | Saturday 12 July 2025 13:32:33 +0000 (0:00:00.124) 0:00:02.369 ********* 2025-07-12 13:32:37.121880 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:32:37.121900 | orchestrator | 2025-07-12 13:32:37.121920 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-12 13:32:37.121938 | orchestrator | Saturday 12 July 2025 13:32:33 +0000 (0:00:00.259) 0:00:02.629 ********* 2025-07-12 13:32:37.121957 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:32:37.121975 | orchestrator | 2025-07-12 13:32:37.121993 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-12 13:32:37.122013 | orchestrator | Saturday 12 July 2025 13:32:33 +0000 (0:00:00.662) 0:00:03.292 ********* 2025-07-12 13:32:37.122113 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:32:37.122125 | orchestrator | 2025-07-12 13:32:37.122173 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-12 13:32:37.122184 | orchestrator | 2025-07-12 13:32:37.122195 | orchestrator | TASK [Exit playbook, if 
user did not mean to reboot systems] ******************* 2025-07-12 13:32:37.122211 | orchestrator | Saturday 12 July 2025 13:32:34 +0000 (0:00:00.126) 0:00:03.418 ********* 2025-07-12 13:32:37.122222 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:32:37.122232 | orchestrator | 2025-07-12 13:32:37.122243 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-12 13:32:37.122254 | orchestrator | Saturday 12 July 2025 13:32:34 +0000 (0:00:00.111) 0:00:03.530 ********* 2025-07-12 13:32:37.122265 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:32:37.122275 | orchestrator | 2025-07-12 13:32:37.122285 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-12 13:32:37.122296 | orchestrator | Saturday 12 July 2025 13:32:34 +0000 (0:00:00.687) 0:00:04.218 ********* 2025-07-12 13:32:37.122307 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:32:37.122317 | orchestrator | 2025-07-12 13:32:37.122327 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-12 13:32:37.122338 | orchestrator | 2025-07-12 13:32:37.122349 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-12 13:32:37.122359 | orchestrator | Saturday 12 July 2025 13:32:35 +0000 (0:00:00.120) 0:00:04.338 ********* 2025-07-12 13:32:37.122370 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:32:37.122380 | orchestrator | 2025-07-12 13:32:37.122390 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-12 13:32:37.122401 | orchestrator | Saturday 12 July 2025 13:32:35 +0000 (0:00:00.113) 0:00:04.451 ********* 2025-07-12 13:32:37.122412 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:32:37.122422 | orchestrator | 2025-07-12 13:32:37.122432 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2025-07-12 13:32:37.122443 | orchestrator | Saturday 12 July 2025 13:32:35 +0000 (0:00:00.684) 0:00:05.135 ********* 2025-07-12 13:32:37.122454 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:32:37.122464 | orchestrator | 2025-07-12 13:32:37.122475 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-12 13:32:37.122485 | orchestrator | 2025-07-12 13:32:37.122496 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-12 13:32:37.122506 | orchestrator | Saturday 12 July 2025 13:32:35 +0000 (0:00:00.116) 0:00:05.252 ********* 2025-07-12 13:32:37.122517 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:32:37.122527 | orchestrator | 2025-07-12 13:32:37.122537 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-12 13:32:37.122590 | orchestrator | Saturday 12 July 2025 13:32:36 +0000 (0:00:00.107) 0:00:05.359 ********* 2025-07-12 13:32:37.122606 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:32:37.122617 | orchestrator | 2025-07-12 13:32:37.122628 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-12 13:32:37.122639 | orchestrator | Saturday 12 July 2025 13:32:36 +0000 (0:00:00.669) 0:00:06.028 ********* 2025-07-12 13:32:37.122674 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:32:37.122685 | orchestrator | 2025-07-12 13:32:37.122696 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:32:37.122727 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 13:32:37.122741 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 13:32:37.122752 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2025-07-12 13:32:37.122763 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 13:32:37.122773 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 13:32:37.122784 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 13:32:37.122794 | orchestrator | 2025-07-12 13:32:37.122805 | orchestrator | 2025-07-12 13:32:37.122816 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:32:37.122832 | orchestrator | Saturday 12 July 2025 13:32:36 +0000 (0:00:00.041) 0:00:06.070 ********* 2025-07-12 13:32:37.122843 | orchestrator | =============================================================================== 2025-07-12 13:32:37.122854 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.35s 2025-07-12 13:32:37.122864 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.82s 2025-07-12 13:32:37.122875 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.66s 2025-07-12 13:32:37.386338 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-07-12 13:32:49.349328 | orchestrator | 2025-07-12 13:32:49 | INFO  | Task acd7f61c-5233-40ed-beb7-8d6f2c44745c (wait-for-connection) was prepared for execution. 2025-07-12 13:32:49.349447 | orchestrator | 2025-07-12 13:32:49 | INFO  | It takes a moment until task acd7f61c-5233-40ed-beb7-8d6f2c44745c (wait-for-connection) has been started and output is visible here. 
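The two plays around this point implement a reboot-then-reconnect pattern: the reboot play fires the reboot without waiting ("do not wait for the reboot to complete"), and a separate `wait-for-connection` play then polls until each node answers again. A generic retry helper with the same bounded-attempts shape (the helper name and 1-second delay are assumptions; the play itself uses Ansible's `wait_for_connection`):

```shell
#!/usr/bin/env bash
# Poll a command until it succeeds or the attempt budget runs out.
wait_until() {
    local max_attempts="$1"; shift
    local attempt=1
    until "$@"; do
        if (( attempt++ == max_attempts )); then
            return 1
        fi
        sleep 1
    done
}

# Example (hypothetical host): poll an SSH endpoint until it accepts
# connections, roughly what wait_for_connection does for each node.
# wait_until 60 ssh -o ConnectTimeout=5 -o BatchMode=yes testbed-node-0 true
```

Decoupling the reboot from the wait lets all six nodes reboot in parallel instead of serializing on each node's downtime.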
2025-07-12 13:33:05.247451 | orchestrator | 2025-07-12 13:33:05.247621 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-07-12 13:33:05.247640 | orchestrator | 2025-07-12 13:33:05.247652 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-07-12 13:33:05.247664 | orchestrator | Saturday 12 July 2025 13:32:53 +0000 (0:00:00.249) 0:00:00.249 ********* 2025-07-12 13:33:05.247675 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:33:05.247687 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:33:05.247698 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:33:05.247709 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:33:05.247720 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:33:05.247730 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:33:05.247741 | orchestrator | 2025-07-12 13:33:05.247752 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:33:05.247763 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:33:05.247807 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:33:05.247819 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:33:05.247830 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:33:05.247841 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:33:05.247852 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:33:05.247862 | orchestrator | 2025-07-12 13:33:05.247873 | orchestrator | 2025-07-12 13:33:05.247884 | orchestrator | TASKS RECAP 
******************************************************************** 2025-07-12 13:33:05.247894 | orchestrator | Saturday 12 July 2025 13:33:04 +0000 (0:00:11.537) 0:00:11.787 ********* 2025-07-12 13:33:05.247905 | orchestrator | =============================================================================== 2025-07-12 13:33:05.247916 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.54s 2025-07-12 13:33:05.521263 | orchestrator | + osism apply hddtemp 2025-07-12 13:33:17.437305 | orchestrator | 2025-07-12 13:33:17 | INFO  | Task 997c6d1a-8aa1-4575-9248-a08343af568b (hddtemp) was prepared for execution. 2025-07-12 13:33:17.437416 | orchestrator | 2025-07-12 13:33:17 | INFO  | It takes a moment until task 997c6d1a-8aa1-4575-9248-a08343af568b (hddtemp) has been started and output is visible here. 2025-07-12 13:33:44.633179 | orchestrator | 2025-07-12 13:33:44.633303 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-07-12 13:33:44.633321 | orchestrator | 2025-07-12 13:33:44.633333 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-07-12 13:33:44.633345 | orchestrator | Saturday 12 July 2025 13:33:21 +0000 (0:00:00.262) 0:00:00.262 ********* 2025-07-12 13:33:44.633356 | orchestrator | ok: [testbed-manager] 2025-07-12 13:33:44.633368 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:33:44.633379 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:33:44.633390 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:33:44.633401 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:33:44.633411 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:33:44.633422 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:33:44.633433 | orchestrator | 2025-07-12 13:33:44.633444 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-07-12 13:33:44.633454 | orchestrator | Saturday 12 July 2025 
13:33:22 +0000 (0:00:00.706) 0:00:00.968 ********* 2025-07-12 13:33:44.633468 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:33:44.633481 | orchestrator | 2025-07-12 13:33:44.633492 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-07-12 13:33:44.633502 | orchestrator | Saturday 12 July 2025 13:33:23 +0000 (0:00:01.209) 0:00:02.178 ********* 2025-07-12 13:33:44.633513 | orchestrator | ok: [testbed-manager] 2025-07-12 13:33:44.633524 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:33:44.633535 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:33:44.633545 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:33:44.633556 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:33:44.633566 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:33:44.633577 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:33:44.633636 | orchestrator | 2025-07-12 13:33:44.633666 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-07-12 13:33:44.633677 | orchestrator | Saturday 12 July 2025 13:33:25 +0000 (0:00:01.960) 0:00:04.139 ********* 2025-07-12 13:33:44.633688 | orchestrator | changed: [testbed-manager] 2025-07-12 13:33:44.633725 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:33:44.633738 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:33:44.633751 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:33:44.633763 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:33:44.633775 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:33:44.633788 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:33:44.633800 | orchestrator | 2025-07-12 13:33:44.633812 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] *********
2025-07-12 13:33:44.633822 | orchestrator | Saturday 12 July 2025 13:33:26 +0000 (0:00:01.179) 0:00:05.318 *********
2025-07-12 13:33:44.633833 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:33:44.633844 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:33:44.633854 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:33:44.633865 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:33:44.633875 | orchestrator | ok: [testbed-manager]
2025-07-12 13:33:44.633886 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:33:44.633896 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:33:44.633906 | orchestrator |
2025-07-12 13:33:44.633917 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2025-07-12 13:33:44.633928 | orchestrator | Saturday 12 July 2025 13:33:27 +0000 (0:00:01.192) 0:00:06.510 *********
2025-07-12 13:33:44.633938 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:33:44.633949 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:33:44.633959 | orchestrator | changed: [testbed-manager]
2025-07-12 13:33:44.633970 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:33:44.633980 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:33:44.633991 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:33:44.634001 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:33:44.634012 | orchestrator |
2025-07-12 13:33:44.634112 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2025-07-12 13:33:44.634125 | orchestrator | Saturday 12 July 2025 13:33:28 +0000 (0:00:00.849) 0:00:07.360 *********
2025-07-12 13:33:44.634145 | orchestrator | changed: [testbed-manager]
2025-07-12 13:33:44.634164 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:33:44.634184 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:33:44.634196 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:33:44.634206 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:33:44.634217 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:33:44.634227 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:33:44.634237 | orchestrator |
2025-07-12 13:33:44.634248 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2025-07-12 13:33:44.634273 | orchestrator | Saturday 12 July 2025 13:33:40 +0000 (0:00:12.257) 0:00:19.617 *********
2025-07-12 13:33:44.634284 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:33:44.634296 | orchestrator |
2025-07-12 13:33:44.634306 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2025-07-12 13:33:44.634317 | orchestrator | Saturday 12 July 2025 13:33:42 +0000 (0:00:01.404) 0:00:21.022 *********
2025-07-12 13:33:44.634328 | orchestrator | changed: [testbed-manager]
2025-07-12 13:33:44.634338 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:33:44.634349 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:33:44.634359 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:33:44.634369 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:33:44.634380 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:33:44.634390 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:33:44.634400 | orchestrator |
2025-07-12 13:33:44.634411 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:33:44.634422 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:33:44.634463 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 13:33:44.634476 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 13:33:44.634487 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 13:33:44.634498 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 13:33:44.634508 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 13:33:44.634519 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 13:33:44.634530 | orchestrator |
2025-07-12 13:33:44.634540 | orchestrator |
2025-07-12 13:33:44.634551 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:33:44.634562 | orchestrator | Saturday 12 July 2025 13:33:44 +0000 (0:00:01.900) 0:00:22.923 *********
2025-07-12 13:33:44.634573 | orchestrator | ===============================================================================
2025-07-12 13:33:44.634583 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.26s
2025-07-12 13:33:44.634624 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.96s
2025-07-12 13:33:44.634636 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.90s
2025-07-12 13:33:44.634646 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.40s
2025-07-12 13:33:44.634657 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.21s
2025-07-12 13:33:44.634667 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.19s
2025-07-12 13:33:44.634678 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.18s
2025-07-12 13:33:44.634688 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.85s
2025-07-12 13:33:44.634699 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.71s
2025-07-12 13:33:44.926105 | orchestrator | ++ semver 9.2.0 7.1.1
2025-07-12 13:33:44.984574 | orchestrator | + [[ 1 -ge 0 ]]
2025-07-12 13:33:44.984676 | orchestrator | + sudo systemctl restart manager.service
2025-07-12 13:33:58.708282 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-07-12 13:33:58.708397 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-07-12 13:33:58.708415 | orchestrator | + local max_attempts=60
2025-07-12 13:33:58.708430 | orchestrator | + local name=ceph-ansible
2025-07-12 13:33:58.708441 | orchestrator | + local attempt_num=1
2025-07-12 13:33:58.708453 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 13:33:58.737325 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-07-12 13:33:58.737384 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 13:33:58.737396 | orchestrator | + sleep 5
2025-07-12 13:34:03.744530 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 13:34:03.787530 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-07-12 13:34:03.787662 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 13:34:03.787678 | orchestrator | + sleep 5
2025-07-12 13:34:08.791391 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 13:34:08.825804 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-07-12 13:34:08.825888 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 13:34:08.825903 | orchestrator | + sleep 5
2025-07-12 13:34:13.830929 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 13:34:13.869136 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-07-12 13:34:13.869201 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 13:34:13.869215 | orchestrator | + sleep 5
2025-07-12 13:34:18.873969 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 13:34:18.920500 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-07-12 13:34:18.920569 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 13:34:18.920583 | orchestrator | + sleep 5
2025-07-12 13:34:23.925174 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 13:34:23.960819 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-07-12 13:34:23.960905 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 13:34:23.960918 | orchestrator | + sleep 5
2025-07-12 13:34:28.965853 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 13:34:29.007354 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-07-12 13:34:29.007421 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 13:34:29.007434 | orchestrator | + sleep 5
2025-07-12 13:34:34.016436 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 13:34:34.040209 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-07-12 13:34:34.040272 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 13:34:34.040286 | orchestrator | + sleep 5
2025-07-12 13:34:39.045804 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 13:34:39.079643 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-07-12 13:34:39.079730 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 13:34:39.079745 | orchestrator | + sleep 5
2025-07-12 13:34:44.083804 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 13:34:44.120701 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-07-12 13:34:44.120796 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 13:34:44.120810 | orchestrator | + sleep 5
2025-07-12 13:34:49.125404 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 13:34:49.166189 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-07-12 13:34:49.166264 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 13:34:49.166278 | orchestrator | + sleep 5
2025-07-12 13:34:54.171239 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 13:34:54.218449 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-07-12 13:34:54.218540 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 13:34:54.218555 | orchestrator | + sleep 5
2025-07-12 13:34:59.224109 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 13:34:59.255584 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-07-12 13:34:59.255687 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 13:34:59.255703 | orchestrator | + sleep 5
2025-07-12 13:35:04.260280 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 13:35:04.296806 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-12 13:35:04.296904 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-07-12 13:35:04.296921 | orchestrator | + local max_attempts=60
2025-07-12 13:35:04.296934 | orchestrator | + local name=kolla-ansible
2025-07-12 13:35:04.296946 | orchestrator | + local attempt_num=1
2025-07-12 13:35:04.297316 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-07-12 13:35:04.341460 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-12 13:35:04.341537 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-07-12 13:35:04.341550 | orchestrator | + local max_attempts=60
2025-07-12 13:35:04.341562 | orchestrator | + local name=osism-ansible
2025-07-12 13:35:04.341573 | orchestrator | + local attempt_num=1
2025-07-12 13:35:04.342161 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-07-12 13:35:04.379007 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-12 13:35:04.379081 | orchestrator | + [[ true == \t\r\u\e ]]
2025-07-12 13:35:04.379095 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-07-12 13:35:04.544551 | orchestrator | ARA in ceph-ansible already disabled.
2025-07-12 13:35:04.687292 | orchestrator | ARA in kolla-ansible already disabled.
2025-07-12 13:35:04.834247 | orchestrator | ARA in osism-ansible already disabled.
2025-07-12 13:35:04.992240 | orchestrator | ARA in osism-kubernetes already disabled.
2025-07-12 13:35:04.993816 | orchestrator | + osism apply gather-facts
2025-07-12 13:35:16.822610 | orchestrator | 2025-07-12 13:35:16 | INFO  | Task fb36358a-fa9d-45dd-a0f6-33ed9e722437 (gather-facts) was prepared for execution.
2025-07-12 13:35:16.822792 | orchestrator | 2025-07-12 13:35:16 | INFO  | It takes a moment until task fb36358a-fa9d-45dd-a0f6-33ed9e722437 (gather-facts) has been started and output is visible here.
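The `set -x` trace above spells out a polling loop. A minimal reconstruction as a bash function, matching the trace step for step (the actual helper lives in the osism/testbed configuration scripts and may differ; the `DOCKER` variable is introduced here for illustration, the trace hardcodes `/usr/bin/docker`):

```shell
#!/usr/bin/env bash
DOCKER="${DOCKER:-/usr/bin/docker}"

wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # Poll the container's health status every 5 seconds until it reports
    # "healthy" or the attempt budget is exhausted.
    until [[ "$("$DOCKER" inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "$name did not become healthy after $max_attempts attempts" >&2
            return 1
        fi
        sleep 5
    done
}
```

In the log, ceph-ansible cycles through `unhealthy`, then `starting`, and reaches `healthy` after roughly 14 polls (about 65 seconds), while kolla-ansible and osism-ansible already report `healthy` on their first check.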
2025-07-12 13:35:30.054932 | orchestrator |
2025-07-12 13:35:30.055051 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-07-12 13:35:30.055068 | orchestrator |
2025-07-12 13:35:30.055098 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-07-12 13:35:30.055110 | orchestrator | Saturday 12 July 2025 13:35:20 +0000 (0:00:00.224) 0:00:00.224 *********
2025-07-12 13:35:30.055121 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:35:30.055133 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:35:30.055144 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:35:30.055155 | orchestrator | ok: [testbed-manager]
2025-07-12 13:35:30.055166 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:35:30.055177 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:35:30.055188 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:35:30.055198 | orchestrator |
2025-07-12 13:35:30.055209 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-07-12 13:35:30.055221 | orchestrator |
2025-07-12 13:35:30.055232 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-07-12 13:35:30.055243 | orchestrator | Saturday 12 July 2025 13:35:29 +0000 (0:00:08.270) 0:00:08.494 *********
2025-07-12 13:35:30.055254 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:35:30.055266 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:35:30.055277 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:35:30.055288 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:35:30.055298 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:35:30.055309 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:35:30.055320 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:35:30.055330 | orchestrator |
2025-07-12 13:35:30.055341 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:35:30.055352 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 13:35:30.055365 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 13:35:30.055376 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 13:35:30.055387 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 13:35:30.055398 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 13:35:30.055409 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 13:35:30.055419 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 13:35:30.055430 | orchestrator |
2025-07-12 13:35:30.055441 | orchestrator |
2025-07-12 13:35:30.055451 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:35:30.055462 | orchestrator | Saturday 12 July 2025 13:35:29 +0000 (0:00:00.502) 0:00:08.997 *********
2025-07-12 13:35:30.055473 | orchestrator | ===============================================================================
2025-07-12 13:35:30.055486 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.27s
2025-07-12 13:35:30.055499 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s
2025-07-12 13:35:30.344389 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2025-07-12 13:35:30.356026 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2025-07-12 13:35:30.367070 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2025-07-12 13:35:30.377059 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2025-07-12 13:35:30.387787 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2025-07-12 13:35:30.398121 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2025-07-12 13:35:30.407279 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2025-07-12 13:35:30.422909 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2025-07-12 13:35:30.437258 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2025-07-12 13:35:30.450364 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2025-07-12 13:35:30.460013 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2025-07-12 13:35:30.468895 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2025-07-12 13:35:30.478711 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2025-07-12 13:35:30.489682 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2025-07-12 13:35:30.499970 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2025-07-12 13:35:30.511912 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2025-07-12 13:35:30.533454 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2025-07-12 13:35:30.548930 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2025-07-12 13:35:30.561382 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2025-07-12 13:35:30.572341 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2025-07-12 13:35:30.589422 | orchestrator | + [[ false == \t\r\u\e ]]
2025-07-12 13:35:30.975879 | orchestrator | ok: Runtime: 0:22:58.403077
2025-07-12 13:35:31.084427 |
2025-07-12 13:35:31.084601 | TASK [Deploy services]
2025-07-12 13:35:31.617612 | orchestrator | skipping: Conditional result was False
2025-07-12 13:35:31.635527 |
2025-07-12 13:35:31.635711 | TASK [Deploy in a nutshell]
2025-07-12 13:35:32.327486 | orchestrator |
2025-07-12 13:35:32.327689 | orchestrator | # PULL IMAGES
2025-07-12 13:35:32.327725 | orchestrator |
2025-07-12 13:35:32.327742 | orchestrator | + set -e
2025-07-12 13:35:32.327759 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-07-12 13:35:32.327780 | orchestrator | ++ export INTERACTIVE=false
2025-07-12 13:35:32.327793 | orchestrator | ++ INTERACTIVE=false
2025-07-12 13:35:32.327838 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-07-12 13:35:32.327859 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-07-12 13:35:32.327874 | orchestrator | + source /opt/manager-vars.sh
2025-07-12 13:35:32.327886 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-07-12 13:35:32.327904 | orchestrator | ++ NUMBER_OF_NODES=6
2025-07-12 13:35:32.327915 | orchestrator | ++ export CEPH_VERSION=reef
2025-07-12 13:35:32.327932 | orchestrator | ++ CEPH_VERSION=reef
2025-07-12 13:35:32.327943 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-07-12 13:35:32.327961 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-07-12 13:35:32.327972 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-07-12 13:35:32.327986 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-07-12 13:35:32.327997 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-12 13:35:32.328009 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-12 13:35:32.328019 | orchestrator | ++ export ARA=false
2025-07-12 13:35:32.328030 | orchestrator | ++ ARA=false
2025-07-12 13:35:32.328041 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-12 13:35:32.328052 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-12 13:35:32.328062 | orchestrator | ++ export TEMPEST=false
2025-07-12 13:35:32.328073 | orchestrator | ++ TEMPEST=false
2025-07-12 13:35:32.328083 | orchestrator | ++ export IS_ZUUL=true
2025-07-12 13:35:32.328094 | orchestrator | ++ IS_ZUUL=true
2025-07-12 13:35:32.328105 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180
2025-07-12 13:35:32.328117 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180
2025-07-12 13:35:32.328128 | orchestrator | ++ export EXTERNAL_API=false
2025-07-12 13:35:32.328138 | orchestrator | ++ EXTERNAL_API=false
2025-07-12 13:35:32.328149 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-12 13:35:32.328160 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-12 13:35:32.328171 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-12 13:35:32.328181 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-12 13:35:32.328192 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-12 13:35:32.328209 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-12 13:35:32.328221 | orchestrator | + echo
2025-07-12 13:35:32.328232 | orchestrator | + echo '# PULL IMAGES'
2025-07-12 13:35:32.328243 | orchestrator | + echo
2025-07-12 13:35:32.328264 | orchestrator | ++ semver 9.2.0 7.0.0
2025-07-12 13:35:32.385249 | orchestrator | + [[ 1 -ge 0 ]]
2025-07-12 13:35:32.385319 | orchestrator | + osism apply -r 2 -e custom pull-images
2025-07-12 13:35:34.177367 | orchestrator | 2025-07-12 13:35:34 | INFO  | Trying to run play pull-images in environment custom
2025-07-12 13:35:44.292198 | orchestrator | 2025-07-12 13:35:44 | INFO  | Task 7380fe43-64a5-49b7-8164-fe0d0380f08d (pull-images) was prepared for execution.
2025-07-12 13:35:44.292326 | orchestrator | 2025-07-12 13:35:44 | INFO  | It takes a moment until task 7380fe43-64a5-49b7-8164-fe0d0380f08d (pull-images) has been started and output is visible here.
2025-07-12 13:37:54.752899 | orchestrator |
2025-07-12 13:37:54.753032 | orchestrator | PLAY [Pull images] *************************************************************
2025-07-12 13:37:54.753050 | orchestrator |
2025-07-12 13:37:54.753063 | orchestrator | TASK [Pull keystone image] *****************************************************
2025-07-12 13:37:54.753087 | orchestrator | Saturday 12 July 2025 13:35:48 +0000 (0:00:00.197) 0:00:00.197 *********
2025-07-12 13:37:54.753098 | orchestrator | changed: [testbed-manager]
2025-07-12 13:37:54.753109 | orchestrator |
2025-07-12 13:37:54.753121 | orchestrator | TASK [Pull other images] *******************************************************
2025-07-12 13:37:54.753132 | orchestrator | Saturday 12 July 2025 13:36:55 +0000 (0:01:07.116) 0:01:07.314 *********
2025-07-12 13:37:54.753143 | orchestrator | changed: [testbed-manager] => (item=aodh)
2025-07-12 13:37:54.753158 | orchestrator | changed: [testbed-manager] => (item=barbican)
2025-07-12 13:37:54.753169 | orchestrator | changed: [testbed-manager] => (item=ceilometer)
2025-07-12 13:37:54.753180 | orchestrator | changed: [testbed-manager] => (item=cinder)
2025-07-12 13:37:54.753191 | orchestrator | changed: [testbed-manager] => (item=common)
2025-07-12 13:37:54.753202 | orchestrator | changed: [testbed-manager] => (item=designate)
2025-07-12 13:37:54.753247 | orchestrator | changed: [testbed-manager] => (item=glance)
2025-07-12 13:37:54.753259 | orchestrator | changed: [testbed-manager] => (item=grafana)
2025-07-12 13:37:54.753273 | orchestrator | changed: [testbed-manager] => (item=horizon)
2025-07-12 13:37:54.753284 | orchestrator | changed: [testbed-manager] => (item=ironic)
2025-07-12 13:37:54.753294 | orchestrator | changed: [testbed-manager] => (item=loadbalancer)
2025-07-12 13:37:54.753305 | orchestrator | changed: [testbed-manager] => (item=magnum)
2025-07-12 13:37:54.753315 | orchestrator | changed: [testbed-manager] => (item=mariadb)
2025-07-12 13:37:54.753325 | orchestrator | changed: [testbed-manager] => (item=memcached)
2025-07-12 13:37:54.753336 | orchestrator | changed: [testbed-manager] => (item=neutron)
2025-07-12 13:37:54.753346 | orchestrator | changed: [testbed-manager] => (item=nova)
2025-07-12 13:37:54.753356 | orchestrator | changed: [testbed-manager] => (item=octavia)
2025-07-12 13:37:54.753367 | orchestrator | changed: [testbed-manager] => (item=opensearch)
2025-07-12 13:37:54.753377 | orchestrator | changed: [testbed-manager] => (item=openvswitch)
2025-07-12 13:37:54.753387 | orchestrator | changed: [testbed-manager] => (item=ovn)
2025-07-12 13:37:54.753398 | orchestrator | changed: [testbed-manager] => (item=placement)
2025-07-12 13:37:54.753408 | orchestrator | changed: [testbed-manager] => (item=rabbitmq)
2025-07-12 13:37:54.753419 | orchestrator | changed: [testbed-manager] => (item=redis)
2025-07-12 13:37:54.753429 | orchestrator | changed: [testbed-manager] => (item=skyline)
2025-07-12 13:37:54.753439 | orchestrator |
2025-07-12 13:37:54.753450 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:37:54.753461 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:37:54.753473 | orchestrator |
2025-07-12 13:37:54.753484 | orchestrator |
2025-07-12 13:37:54.753495 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:37:54.753505 | orchestrator | Saturday 12 July 2025 13:37:54 +0000 (0:00:59.152) 0:02:06.466 *********
2025-07-12 13:37:54.753516 | orchestrator | ===============================================================================
2025-07-12 13:37:54.753526 | orchestrator | Pull keystone image ---------------------------------------------------- 67.12s
2025-07-12 13:37:54.753537 | orchestrator | Pull other images ------------------------------------------------------ 59.15s
2025-07-12 13:37:57.097161 | orchestrator | 2025-07-12 13:37:57 | INFO  | Trying to run play wipe-partitions in environment custom
2025-07-12 13:38:07.226986 | orchestrator | 2025-07-12 13:38:07 | INFO  | Task 5cdd4e31-7bb9-4595-bf43-bb640bdfa853 (wipe-partitions) was prepared for execution.
2025-07-12 13:38:07.227105 | orchestrator | 2025-07-12 13:38:07 | INFO  | It takes a moment until task 5cdd4e31-7bb9-4595-bf43-bb640bdfa853 (wipe-partitions) has been started and output is visible here.
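The trace gates several steps on `semver X Y` followed by `[[ 1 -ge 0 ]]`, i.e. the helper prints -1, 0 or 1 depending on how the first version compares to the second. A minimal stand-in with the same contract (the real `semver` helper in the testbed scripts is not shown in the log; this sketch relies on GNU `sort -V`):

```shell
#!/usr/bin/env bash
# Compare two dotted version strings; print -1, 0 or 1 (shell analogue of
# strcmp-style semantics, as implied by `semver 9.2.0 7.1.1` -> 1 above).
semver() {
    if [[ "$1" == "$2" ]]; then
        echo 0
    elif [[ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" == "$1" ]]; then
        echo -1   # $1 sorts first, so it is the older version
    else
        echo 1
    fi
}

# Usage pattern matching the trace: only run the play on manager >= 7.0.0.
# [[ $(semver "$MANAGER_VERSION" 7.0.0) -ge 0 ]] && osism apply -r 2 -e custom pull-images
```

With `MANAGER_VERSION=9.2.0` the comparison yields 1, so the guarded `osism apply` commands run, as seen in the log.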
2025-07-12 13:38:20.456898 | orchestrator |
2025-07-12 13:38:20.457027 | orchestrator | PLAY [Wipe partitions] *********************************************************
2025-07-12 13:38:20.457044 | orchestrator |
2025-07-12 13:38:20.457056 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2025-07-12 13:38:20.457082 | orchestrator | Saturday 12 July 2025 13:38:11 +0000 (0:00:00.139) 0:00:00.139 *********
2025-07-12 13:38:20.457094 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:38:20.457106 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:38:20.457117 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:38:20.457128 | orchestrator |
2025-07-12 13:38:20.457138 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2025-07-12 13:38:20.457149 | orchestrator | Saturday 12 July 2025 13:38:11 +0000 (0:00:00.576) 0:00:00.715 *********
2025-07-12 13:38:20.457160 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:38:20.457171 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:38:20.457181 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:38:20.457192 | orchestrator |
2025-07-12 13:38:20.457203 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2025-07-12 13:38:20.457239 | orchestrator | Saturday 12 July 2025 13:38:11 +0000 (0:00:00.255) 0:00:00.971 *********
2025-07-12 13:38:20.457251 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:38:20.457263 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:38:20.457274 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:38:20.457284 | orchestrator |
2025-07-12 13:38:20.457295 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2025-07-12 13:38:20.457306 | orchestrator | Saturday 12 July 2025 13:38:12 +0000 (0:00:00.739) 0:00:01.710 *********
2025-07-12 13:38:20.457316 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:38:20.457327 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:38:20.457338 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:38:20.457348 | orchestrator |
2025-07-12 13:38:20.457359 | orchestrator | TASK [Check device availability] ***********************************************
2025-07-12 13:38:20.457370 | orchestrator | Saturday 12 July 2025 13:38:13 +0000 (0:00:00.291) 0:00:02.002 *********
2025-07-12 13:38:20.457380 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-07-12 13:38:20.457391 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-07-12 13:38:20.457402 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-07-12 13:38:20.457413 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-07-12 13:38:20.457426 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-07-12 13:38:20.457442 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-07-12 13:38:20.457455 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-07-12 13:38:20.457467 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-07-12 13:38:20.457479 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-07-12 13:38:20.457489 | orchestrator |
2025-07-12 13:38:20.457500 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2025-07-12 13:38:20.457511 | orchestrator | Saturday 12 July 2025 13:38:15 +0000 (0:00:02.217) 0:00:04.220 *********
2025-07-12 13:38:20.457522 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2025-07-12 13:38:20.457532 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2025-07-12 13:38:20.457543 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2025-07-12 13:38:20.457554 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2025-07-12 13:38:20.457564 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2025-07-12 13:38:20.457575 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2025-07-12 13:38:20.457585 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2025-07-12 13:38:20.457596 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2025-07-12 13:38:20.457606 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2025-07-12 13:38:20.457617 | orchestrator |
2025-07-12 13:38:20.457628 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2025-07-12 13:38:20.457639 | orchestrator | Saturday 12 July 2025 13:38:16 +0000 (0:00:01.338) 0:00:05.558 *********
2025-07-12 13:38:20.457649 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-07-12 13:38:20.457660 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-07-12 13:38:20.457670 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-07-12 13:38:20.457713 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-07-12 13:38:20.457724 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-07-12 13:38:20.457734 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-07-12 13:38:20.457745 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-07-12 13:38:20.457755 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-07-12 13:38:20.457766 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-07-12 13:38:20.457776 | orchestrator |
2025-07-12 13:38:20.457787 | orchestrator | TASK [Reload udev rules] *******************************************************
2025-07-12 13:38:20.457798 | orchestrator | Saturday 12 July 2025 13:38:18 +0000 (0:00:02.264) 0:00:07.822 *********
2025-07-12 13:38:20.457808 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:38:20.457827 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:38:20.457838 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:38:20.457848 | orchestrator |
2025-07-12 13:38:20.457858 | orchestrator | TASK [Request device events from the kernel] ***********************************
2025-07-12 13:38:20.457869 | orchestrator | Saturday 12 July 2025 13:38:19 +0000 (0:00:00.622) 0:00:08.445 *********
2025-07-12 13:38:20.457880 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:38:20.457890 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:38:20.457900 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:38:20.457911 | orchestrator |
2025-07-12 13:38:20.457921 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:38:20.457933 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 13:38:20.457945 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 13:38:20.457973 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 13:38:20.457984 | orchestrator |
2025-07-12 13:38:20.457995 | orchestrator |
2025-07-12 13:38:20.458011 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:38:20.458131 | orchestrator | Saturday 12 July 2025 13:38:20 +0000 (0:00:00.634) 0:00:09.079 *********
2025-07-12 13:38:20.458143 | orchestrator | ===============================================================================
2025-07-12 13:38:20.458154 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.26s
2025-07-12 13:38:20.458165 | orchestrator | Check device availability ----------------------------------------------- 2.22s
2025-07-12 13:38:20.458175 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.34s
2025-07-12 13:38:20.458186 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.74s
2025-07-12 13:38:20.458196 | orchestrator | Request device events from the kernel ----------------------------------- 0.63s
2025-07-12 13:38:20.458206 | orchestrator | Reload udev rules ------------------------------------------------------- 0.62s
2025-07-12 13:38:20.458217 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.58s
2025-07-12 13:38:20.458227 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.29s
2025-07-12 13:38:20.458237 | orchestrator | Remove all rook related logical devices --------------------------------- 0.26s
2025-07-12 13:38:32.856671 | orchestrator | 2025-07-12 13:38:32 | INFO  | Task 77a453ed-b9d5-482b-be87-fa7d46c12c25 (facts) was prepared for execution.
2025-07-12 13:38:32.856852 | orchestrator | 2025-07-12 13:38:32 | INFO  | It takes a moment until task 77a453ed-b9d5-482b-be87-fa7d46c12c25 (facts) has been started and output is visible here.
2025-07-12 13:38:46.552341 | orchestrator |
2025-07-12 13:38:46.552456 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-07-12 13:38:46.552473 | orchestrator |
2025-07-12 13:38:46.552485 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-07-12 13:38:46.552497 | orchestrator | Saturday 12 July 2025 13:38:37 +0000 (0:00:00.283) 0:00:00.283 *********
2025-07-12 13:38:46.552508 | orchestrator | ok: [testbed-manager]
2025-07-12 13:38:46.552520 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:38:46.552531 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:38:46.552542 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:38:46.552552 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:38:46.552563 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:38:46.552573 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:38:46.552584 | orchestrator |
2025-07-12 13:38:46.552595 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-07-12 13:38:46.552606 | orchestrator | Saturday 12 July 2025 13:38:38 +0000 (0:00:01.193) 0:00:01.477 *********
2025-07-12 13:38:46.552645 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:38:46.552657 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:38:46.552667 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:38:46.552678 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:38:46.552716 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:38:46.552726 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:38:46.552737 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:38:46.552748 | orchestrator |
2025-07-12 13:38:46.552758 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-07-12 13:38:46.552769 | orchestrator |
2025-07-12 13:38:46.552779 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-07-12 13:38:46.552790 | orchestrator | Saturday 12 July 2025 13:38:39 +0000 (0:00:01.287) 0:00:02.765 *********
2025-07-12 13:38:46.552801 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:38:46.552811 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:38:46.552822 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:38:46.552836 | orchestrator | ok: [testbed-manager]
2025-07-12 13:38:46.552846 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:38:46.552857 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:38:46.552867 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:38:46.552880 | orchestrator |
2025-07-12 13:38:46.552892 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-07-12 13:38:46.552904 | orchestrator |
2025-07-12 13:38:46.552916 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-07-12 13:38:46.552929 | orchestrator | Saturday 12 July 2025 13:38:45 +0000 (0:00:05.863) 0:00:08.628 *********
2025-07-12 13:38:46.552942 | orchestrator |
skipping: [testbed-manager] 2025-07-12 13:38:46.552954 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:38:46.552967 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:38:46.552978 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:38:46.552990 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:38:46.553002 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:38:46.553014 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:38:46.553025 | orchestrator | 2025-07-12 13:38:46.553037 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:38:46.553050 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 13:38:46.553063 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 13:38:46.553075 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 13:38:46.553088 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 13:38:46.553115 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 13:38:46.553128 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 13:38:46.553140 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 13:38:46.553152 | orchestrator | 2025-07-12 13:38:46.553164 | orchestrator | 2025-07-12 13:38:46.553176 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:38:46.553188 | orchestrator | Saturday 12 July 2025 13:38:46 +0000 (0:00:00.616) 0:00:09.245 ********* 2025-07-12 13:38:46.553201 | orchestrator | =============================================================================== 
2025-07-12 13:38:46.553214 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.86s 2025-07-12 13:38:46.553236 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.29s 2025-07-12 13:38:46.553247 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.19s 2025-07-12 13:38:46.553257 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.62s 2025-07-12 13:38:48.894288 | orchestrator | 2025-07-12 13:38:48 | INFO  | Task 3201eb06-85e0-458b-9fa6-447ea6cc8a11 (ceph-configure-lvm-volumes) was prepared for execution. 2025-07-12 13:38:48.894398 | orchestrator | 2025-07-12 13:38:48 | INFO  | It takes a moment until task 3201eb06-85e0-458b-9fa6-447ea6cc8a11 (ceph-configure-lvm-volumes) has been started and output is visible here. 2025-07-12 13:39:01.389640 | orchestrator | 2025-07-12 13:39:01.389782 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-07-12 13:39:01.389801 | orchestrator | 2025-07-12 13:39:01.389814 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-07-12 13:39:01.389825 | orchestrator | Saturday 12 July 2025 13:38:53 +0000 (0:00:00.353) 0:00:00.353 ********* 2025-07-12 13:39:01.389837 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-12 13:39:01.389848 | orchestrator | 2025-07-12 13:39:01.389861 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-07-12 13:39:01.389885 | orchestrator | Saturday 12 July 2025 13:38:53 +0000 (0:00:00.268) 0:00:00.621 ********* 2025-07-12 13:39:01.389897 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:39:01.389919 | orchestrator | 2025-07-12 13:39:01.389931 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:39:01.389942 | orchestrator | 
Saturday 12 July 2025 13:38:53 +0000 (0:00:00.236) 0:00:00.857 ********* 2025-07-12 13:39:01.389953 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-07-12 13:39:01.389964 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-07-12 13:39:01.389975 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-07-12 13:39:01.389986 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-07-12 13:39:01.389996 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-07-12 13:39:01.390007 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-07-12 13:39:01.390072 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-07-12 13:39:01.390084 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-07-12 13:39:01.390095 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-07-12 13:39:01.390106 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-07-12 13:39:01.390116 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-07-12 13:39:01.390127 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-07-12 13:39:01.390138 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-07-12 13:39:01.390151 | orchestrator | 2025-07-12 13:39:01.390163 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:39:01.390176 | orchestrator | Saturday 12 July 2025 13:38:54 +0000 (0:00:00.374) 0:00:01.232 ********* 2025-07-12 
13:39:01.390188 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:39:01.390201 | orchestrator | 2025-07-12 13:39:01.390214 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:39:01.390240 | orchestrator | Saturday 12 July 2025 13:38:54 +0000 (0:00:00.518) 0:00:01.751 ********* 2025-07-12 13:39:01.390253 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:39:01.390265 | orchestrator | 2025-07-12 13:39:01.390277 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:39:01.390313 | orchestrator | Saturday 12 July 2025 13:38:54 +0000 (0:00:00.199) 0:00:01.950 ********* 2025-07-12 13:39:01.390326 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:39:01.390339 | orchestrator | 2025-07-12 13:39:01.390351 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:39:01.390364 | orchestrator | Saturday 12 July 2025 13:38:55 +0000 (0:00:00.205) 0:00:02.156 ********* 2025-07-12 13:39:01.390376 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:39:01.390388 | orchestrator | 2025-07-12 13:39:01.390401 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:39:01.390413 | orchestrator | Saturday 12 July 2025 13:38:55 +0000 (0:00:00.187) 0:00:02.343 ********* 2025-07-12 13:39:01.390425 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:39:01.390438 | orchestrator | 2025-07-12 13:39:01.390450 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:39:01.390462 | orchestrator | Saturday 12 July 2025 13:38:55 +0000 (0:00:00.198) 0:00:02.542 ********* 2025-07-12 13:39:01.390474 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:39:01.390487 | orchestrator | 2025-07-12 13:39:01.390500 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2025-07-12 13:39:01.390511 | orchestrator | Saturday 12 July 2025 13:38:55 +0000 (0:00:00.211) 0:00:02.753 ********* 2025-07-12 13:39:01.390522 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:39:01.390532 | orchestrator | 2025-07-12 13:39:01.390543 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:39:01.390554 | orchestrator | Saturday 12 July 2025 13:38:55 +0000 (0:00:00.193) 0:00:02.947 ********* 2025-07-12 13:39:01.390564 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:39:01.390575 | orchestrator | 2025-07-12 13:39:01.390586 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:39:01.390597 | orchestrator | Saturday 12 July 2025 13:38:56 +0000 (0:00:00.218) 0:00:03.165 ********* 2025-07-12 13:39:01.390607 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5d5efef6-d196-4496-8e4e-101ce21afc70) 2025-07-12 13:39:01.390619 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5d5efef6-d196-4496-8e4e-101ce21afc70) 2025-07-12 13:39:01.390630 | orchestrator | 2025-07-12 13:39:01.390640 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:39:01.390651 | orchestrator | Saturday 12 July 2025 13:38:56 +0000 (0:00:00.427) 0:00:03.592 ********* 2025-07-12 13:39:01.390679 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_cf6824d0-2336-4864-a32f-bffef7606523) 2025-07-12 13:39:01.390707 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_cf6824d0-2336-4864-a32f-bffef7606523) 2025-07-12 13:39:01.390718 | orchestrator | 2025-07-12 13:39:01.390730 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:39:01.390740 | orchestrator | Saturday 12 July 2025 13:38:57 +0000 (0:00:00.445) 0:00:04.038 ********* 2025-07-12 13:39:01.390751 | 
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_bad1a367-9870-4c1b-af18-4999b26662c8) 2025-07-12 13:39:01.390762 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_bad1a367-9870-4c1b-af18-4999b26662c8) 2025-07-12 13:39:01.390772 | orchestrator | 2025-07-12 13:39:01.390783 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:39:01.390793 | orchestrator | Saturday 12 July 2025 13:38:57 +0000 (0:00:00.727) 0:00:04.766 ********* 2025-07-12 13:39:01.390804 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ec46bf14-c827-46d0-9a8c-19525aeacad6) 2025-07-12 13:39:01.390814 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ec46bf14-c827-46d0-9a8c-19525aeacad6) 2025-07-12 13:39:01.390825 | orchestrator | 2025-07-12 13:39:01.390835 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:39:01.390846 | orchestrator | Saturday 12 July 2025 13:38:58 +0000 (0:00:00.629) 0:00:05.396 ********* 2025-07-12 13:39:01.390865 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-07-12 13:39:01.390876 | orchestrator | 2025-07-12 13:39:01.390886 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:39:01.390897 | orchestrator | Saturday 12 July 2025 13:38:59 +0000 (0:00:00.804) 0:00:06.200 ********* 2025-07-12 13:39:01.390907 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-07-12 13:39:01.390918 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-07-12 13:39:01.390928 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-07-12 13:39:01.390939 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => 
(item=loop3) 2025-07-12 13:39:01.390949 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-07-12 13:39:01.390960 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-07-12 13:39:01.390970 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-07-12 13:39:01.390981 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-07-12 13:39:01.390992 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-07-12 13:39:01.391002 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-07-12 13:39:01.391012 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-07-12 13:39:01.391023 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-07-12 13:39:01.391046 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-07-12 13:39:01.391057 | orchestrator | 2025-07-12 13:39:01.391067 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:39:01.391078 | orchestrator | Saturday 12 July 2025 13:38:59 +0000 (0:00:00.417) 0:00:06.618 ********* 2025-07-12 13:39:01.391089 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:39:01.391099 | orchestrator | 2025-07-12 13:39:01.391110 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:39:01.391120 | orchestrator | Saturday 12 July 2025 13:38:59 +0000 (0:00:00.203) 0:00:06.822 ********* 2025-07-12 13:39:01.391131 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:39:01.391141 | orchestrator | 2025-07-12 13:39:01.391152 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-07-12 13:39:01.391162 | orchestrator | Saturday 12 July 2025 13:39:00 +0000 (0:00:00.214) 0:00:07.036 ********* 2025-07-12 13:39:01.391173 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:39:01.391183 | orchestrator | 2025-07-12 13:39:01.391194 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:39:01.391204 | orchestrator | Saturday 12 July 2025 13:39:00 +0000 (0:00:00.198) 0:00:07.235 ********* 2025-07-12 13:39:01.391215 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:39:01.391225 | orchestrator | 2025-07-12 13:39:01.391236 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:39:01.391246 | orchestrator | Saturday 12 July 2025 13:39:00 +0000 (0:00:00.209) 0:00:07.445 ********* 2025-07-12 13:39:01.391257 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:39:01.391268 | orchestrator | 2025-07-12 13:39:01.391278 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:39:01.391289 | orchestrator | Saturday 12 July 2025 13:39:00 +0000 (0:00:00.244) 0:00:07.689 ********* 2025-07-12 13:39:01.391299 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:39:01.391310 | orchestrator | 2025-07-12 13:39:01.391320 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:39:01.391331 | orchestrator | Saturday 12 July 2025 13:39:00 +0000 (0:00:00.205) 0:00:07.895 ********* 2025-07-12 13:39:01.391349 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:39:01.391359 | orchestrator | 2025-07-12 13:39:01.391370 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:39:01.391381 | orchestrator | Saturday 12 July 2025 13:39:01 +0000 (0:00:00.261) 0:00:08.157 ********* 2025-07-12 13:39:01.391398 | 
orchestrator | skipping: [testbed-node-3] 2025-07-12 13:39:09.717844 | orchestrator | 2025-07-12 13:39:09.718074 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:39:09.718100 | orchestrator | Saturday 12 July 2025 13:39:01 +0000 (0:00:00.224) 0:00:08.381 ********* 2025-07-12 13:39:09.718112 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-07-12 13:39:09.718124 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-07-12 13:39:09.718136 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-07-12 13:39:09.718147 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-07-12 13:39:09.718158 | orchestrator | 2025-07-12 13:39:09.718169 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:39:09.718180 | orchestrator | Saturday 12 July 2025 13:39:02 +0000 (0:00:01.273) 0:00:09.655 ********* 2025-07-12 13:39:09.718191 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:39:09.718203 | orchestrator | 2025-07-12 13:39:09.718214 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:39:09.718225 | orchestrator | Saturday 12 July 2025 13:39:02 +0000 (0:00:00.217) 0:00:09.872 ********* 2025-07-12 13:39:09.718236 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:39:09.718246 | orchestrator | 2025-07-12 13:39:09.718257 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:39:09.718268 | orchestrator | Saturday 12 July 2025 13:39:03 +0000 (0:00:00.199) 0:00:10.072 ********* 2025-07-12 13:39:09.718279 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:39:09.718290 | orchestrator | 2025-07-12 13:39:09.718322 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:39:09.718336 | orchestrator | Saturday 12 July 2025 13:39:03 +0000 (0:00:00.197) 0:00:10.269 
********* 2025-07-12 13:39:09.718349 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:39:09.718362 | orchestrator | 2025-07-12 13:39:09.718374 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-07-12 13:39:09.718387 | orchestrator | Saturday 12 July 2025 13:39:03 +0000 (0:00:00.214) 0:00:10.484 ********* 2025-07-12 13:39:09.718399 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-07-12 13:39:09.718412 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-07-12 13:39:09.718424 | orchestrator | 2025-07-12 13:39:09.718437 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-07-12 13:39:09.718450 | orchestrator | Saturday 12 July 2025 13:39:03 +0000 (0:00:00.181) 0:00:10.666 ********* 2025-07-12 13:39:09.718462 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:39:09.718474 | orchestrator | 2025-07-12 13:39:09.718487 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-07-12 13:39:09.718499 | orchestrator | Saturday 12 July 2025 13:39:03 +0000 (0:00:00.148) 0:00:10.814 ********* 2025-07-12 13:39:09.718511 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:39:09.718524 | orchestrator | 2025-07-12 13:39:09.718536 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-07-12 13:39:09.718549 | orchestrator | Saturday 12 July 2025 13:39:03 +0000 (0:00:00.140) 0:00:10.954 ********* 2025-07-12 13:39:09.718562 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:39:09.718574 | orchestrator | 2025-07-12 13:39:09.718586 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-07-12 13:39:09.718599 | orchestrator | Saturday 12 July 2025 13:39:04 +0000 (0:00:00.150) 0:00:11.104 ********* 2025-07-12 13:39:09.718613 | orchestrator | ok: [testbed-node-3] 
2025-07-12 13:39:09.718625 | orchestrator | 2025-07-12 13:39:09.718638 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-07-12 13:39:09.718689 | orchestrator | Saturday 12 July 2025 13:39:04 +0000 (0:00:00.151) 0:00:11.256 ********* 2025-07-12 13:39:09.718746 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f86cb3d6-0e78-5b6a-8369-843476bf59dc'}}) 2025-07-12 13:39:09.718767 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a'}}) 2025-07-12 13:39:09.718786 | orchestrator | 2025-07-12 13:39:09.718804 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-07-12 13:39:09.718822 | orchestrator | Saturday 12 July 2025 13:39:04 +0000 (0:00:00.175) 0:00:11.432 ********* 2025-07-12 13:39:09.718834 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f86cb3d6-0e78-5b6a-8369-843476bf59dc'}})  2025-07-12 13:39:09.718853 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a'}})  2025-07-12 13:39:09.718864 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:39:09.718875 | orchestrator | 2025-07-12 13:39:09.718885 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-07-12 13:39:09.718896 | orchestrator | Saturday 12 July 2025 13:39:04 +0000 (0:00:00.152) 0:00:11.584 ********* 2025-07-12 13:39:09.718907 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f86cb3d6-0e78-5b6a-8369-843476bf59dc'}})  2025-07-12 13:39:09.718917 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a'}})  2025-07-12 13:39:09.718928 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:39:09.718939 | 
orchestrator | 2025-07-12 13:39:09.718949 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-07-12 13:39:09.718960 | orchestrator | Saturday 12 July 2025 13:39:04 +0000 (0:00:00.162) 0:00:11.746 ********* 2025-07-12 13:39:09.718971 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f86cb3d6-0e78-5b6a-8369-843476bf59dc'}})  2025-07-12 13:39:09.718982 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a'}})  2025-07-12 13:39:09.718993 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:39:09.719004 | orchestrator | 2025-07-12 13:39:09.719035 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-07-12 13:39:09.719047 | orchestrator | Saturday 12 July 2025 13:39:05 +0000 (0:00:00.364) 0:00:12.111 ********* 2025-07-12 13:39:09.719057 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:39:09.719068 | orchestrator | 2025-07-12 13:39:09.719078 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-07-12 13:39:09.719089 | orchestrator | Saturday 12 July 2025 13:39:05 +0000 (0:00:00.147) 0:00:12.258 ********* 2025-07-12 13:39:09.719100 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:39:09.719110 | orchestrator | 2025-07-12 13:39:09.719121 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-07-12 13:39:09.719131 | orchestrator | Saturday 12 July 2025 13:39:05 +0000 (0:00:00.144) 0:00:12.403 ********* 2025-07-12 13:39:09.719142 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:39:09.719152 | orchestrator | 2025-07-12 13:39:09.719163 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-07-12 13:39:09.719174 | orchestrator | Saturday 12 July 2025 13:39:05 +0000 (0:00:00.143) 0:00:12.547 
********* 2025-07-12 13:39:09.719184 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:39:09.719195 | orchestrator | 2025-07-12 13:39:09.719205 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-07-12 13:39:09.719216 | orchestrator | Saturday 12 July 2025 13:39:05 +0000 (0:00:00.133) 0:00:12.681 ********* 2025-07-12 13:39:09.719227 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:39:09.719237 | orchestrator | 2025-07-12 13:39:09.719248 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-07-12 13:39:09.719258 | orchestrator | Saturday 12 July 2025 13:39:05 +0000 (0:00:00.139) 0:00:12.820 ********* 2025-07-12 13:39:09.719280 | orchestrator | ok: [testbed-node-3] => { 2025-07-12 13:39:09.719291 | orchestrator |  "ceph_osd_devices": { 2025-07-12 13:39:09.719302 | orchestrator |  "sdb": { 2025-07-12 13:39:09.719313 | orchestrator |  "osd_lvm_uuid": "f86cb3d6-0e78-5b6a-8369-843476bf59dc" 2025-07-12 13:39:09.719323 | orchestrator |  }, 2025-07-12 13:39:09.719334 | orchestrator |  "sdc": { 2025-07-12 13:39:09.719345 | orchestrator |  "osd_lvm_uuid": "8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a" 2025-07-12 13:39:09.719355 | orchestrator |  } 2025-07-12 13:39:09.719366 | orchestrator |  } 2025-07-12 13:39:09.719377 | orchestrator | } 2025-07-12 13:39:09.719388 | orchestrator | 2025-07-12 13:39:09.719398 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-07-12 13:39:09.719409 | orchestrator | Saturday 12 July 2025 13:39:05 +0000 (0:00:00.144) 0:00:12.964 ********* 2025-07-12 13:39:09.719420 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:39:09.719430 | orchestrator | 2025-07-12 13:39:09.719441 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-07-12 13:39:09.719451 | orchestrator | Saturday 12 July 2025 13:39:06 +0000 (0:00:00.139) 0:00:13.104 ********* 
2025-07-12 13:39:09.719462 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:39:09.719472 | orchestrator |
2025-07-12 13:39:09.719483 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-07-12 13:39:09.719494 | orchestrator | Saturday 12 July 2025 13:39:06 +0000 (0:00:00.161) 0:00:13.265 *********
2025-07-12 13:39:09.719505 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:39:09.719515 | orchestrator |
2025-07-12 13:39:09.719526 | orchestrator | TASK [Print configuration data] ************************************************
2025-07-12 13:39:09.719536 | orchestrator | Saturday 12 July 2025 13:39:06 +0000 (0:00:00.162) 0:00:13.427 *********
2025-07-12 13:39:09.719547 | orchestrator | changed: [testbed-node-3] => {
2025-07-12 13:39:09.719557 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-07-12 13:39:09.719568 | orchestrator |         "ceph_osd_devices": {
2025-07-12 13:39:09.719578 | orchestrator |             "sdb": {
2025-07-12 13:39:09.719589 | orchestrator |                 "osd_lvm_uuid": "f86cb3d6-0e78-5b6a-8369-843476bf59dc"
2025-07-12 13:39:09.719600 | orchestrator |             },
2025-07-12 13:39:09.719611 | orchestrator |             "sdc": {
2025-07-12 13:39:09.719621 | orchestrator |                 "osd_lvm_uuid": "8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a"
2025-07-12 13:39:09.719632 | orchestrator |             }
2025-07-12 13:39:09.719643 | orchestrator |         },
2025-07-12 13:39:09.719658 | orchestrator |         "lvm_volumes": [
2025-07-12 13:39:09.719669 | orchestrator |             {
2025-07-12 13:39:09.719680 | orchestrator |                 "data": "osd-block-f86cb3d6-0e78-5b6a-8369-843476bf59dc",
2025-07-12 13:39:09.719691 | orchestrator |                 "data_vg": "ceph-f86cb3d6-0e78-5b6a-8369-843476bf59dc"
2025-07-12 13:39:09.719734 | orchestrator |             },
2025-07-12 13:39:09.719755 | orchestrator |             {
2025-07-12 13:39:09.719773 | orchestrator |                 "data": "osd-block-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a",
2025-07-12 13:39:09.719784 | orchestrator |                 "data_vg": "ceph-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a"
2025-07-12 13:39:09.719794 | orchestrator |             }
2025-07-12 13:39:09.719805 | orchestrator |         ]
2025-07-12 13:39:09.719815 | orchestrator |     }
2025-07-12 13:39:09.719833 | orchestrator | }
2025-07-12 13:39:09.719844 | orchestrator |
2025-07-12 13:39:09.719854 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-07-12 13:39:09.719865 | orchestrator | Saturday 12 July 2025 13:39:06 +0000 (0:00:00.261) 0:00:13.688 *********
2025-07-12 13:39:09.719876 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-12 13:39:09.719886 | orchestrator |
2025-07-12 13:39:09.719897 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-07-12 13:39:09.719907 | orchestrator |
2025-07-12 13:39:09.719918 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-07-12 13:39:09.719937 | orchestrator | Saturday 12 July 2025 13:39:09 +0000 (0:00:02.429) 0:00:16.118 *********
2025-07-12 13:39:09.719948 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-07-12 13:39:09.719959 | orchestrator |
2025-07-12 13:39:09.719969 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-07-12 13:39:09.719980 | orchestrator | Saturday 12 July 2025 13:39:09 +0000 (0:00:00.313) 0:00:16.431 *********
2025-07-12 13:39:09.719991 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:39:09.720001 | orchestrator |
2025-07-12 13:39:09.720012 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:39:09.720031 | orchestrator | Saturday 12 July 2025 13:39:09 +0000 (0:00:00.272) 0:00:16.704 *********
2025-07-12 13:39:17.876645 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-07-12 13:39:17.876855 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-07-12 13:39:17.876872 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-07-12 13:39:17.876884 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-07-12 13:39:17.876895 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-07-12 13:39:17.876906 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-07-12 13:39:17.876916 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-07-12 13:39:17.876927 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-07-12 13:39:17.876938 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-07-12 13:39:17.876948 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-07-12 13:39:17.876959 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-07-12 13:39:17.876969 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-07-12 13:39:17.876980 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-07-12 13:39:17.876991 | orchestrator |
2025-07-12 13:39:17.877003 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:39:17.877015 | orchestrator | Saturday 12 July 2025 13:39:10 +0000 (0:00:00.410) 0:00:17.114 *********
2025-07-12 13:39:17.877043 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:39:17.877055 | orchestrator |
2025-07-12 13:39:17.877066 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:39:17.877077 | orchestrator | Saturday 12 July 2025 13:39:10 +0000 (0:00:00.246) 0:00:17.360 *********
2025-07-12 13:39:17.877088 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:39:17.877099 | orchestrator |
2025-07-12 13:39:17.877110 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:39:17.877120 | orchestrator | Saturday 12 July 2025 13:39:10 +0000 (0:00:00.210) 0:00:17.571 *********
2025-07-12 13:39:17.877131 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:39:17.877142 | orchestrator |
2025-07-12 13:39:17.877153 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:39:17.877164 | orchestrator | Saturday 12 July 2025 13:39:10 +0000 (0:00:00.224) 0:00:17.796 *********
2025-07-12 13:39:17.877176 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:39:17.877188 | orchestrator |
2025-07-12 13:39:17.877200 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:39:17.877211 | orchestrator | Saturday 12 July 2025 13:39:11 +0000 (0:00:00.214) 0:00:18.011 *********
2025-07-12 13:39:17.877223 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:39:17.877235 | orchestrator |
2025-07-12 13:39:17.877247 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:39:17.877288 | orchestrator | Saturday 12 July 2025 13:39:11 +0000 (0:00:00.213) 0:00:18.224 *********
2025-07-12 13:39:17.877301 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:39:17.877313 | orchestrator |
2025-07-12 13:39:17.877325 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:39:17.877337 | orchestrator | Saturday 12 July 2025 13:39:11 +0000 (0:00:00.754) 0:00:18.979 *********
2025-07-12 13:39:17.877349 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:39:17.877361 | orchestrator |
2025-07-12 13:39:17.877373 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:39:17.877385 | orchestrator | Saturday 12 July 2025 13:39:12 +0000 (0:00:00.229) 0:00:19.208 *********
2025-07-12 13:39:17.877413 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:39:17.877426 | orchestrator |
2025-07-12 13:39:17.877439 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:39:17.877451 | orchestrator | Saturday 12 July 2025 13:39:12 +0000 (0:00:00.202) 0:00:19.411 *********
2025-07-12 13:39:17.877463 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7818c38c-8f07-44e8-a255-faa9c2adb8b1)
2025-07-12 13:39:17.877476 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7818c38c-8f07-44e8-a255-faa9c2adb8b1)
2025-07-12 13:39:17.877488 | orchestrator |
2025-07-12 13:39:17.877500 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:39:17.877513 | orchestrator | Saturday 12 July 2025 13:39:12 +0000 (0:00:00.435) 0:00:19.846 *********
2025-07-12 13:39:17.877525 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b303d5ed-b20f-4882-90f3-23adead236a1)
2025-07-12 13:39:17.877537 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b303d5ed-b20f-4882-90f3-23adead236a1)
2025-07-12 13:39:17.877548 | orchestrator |
2025-07-12 13:39:17.877559 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:39:17.877582 | orchestrator | Saturday 12 July 2025 13:39:13 +0000 (0:00:00.447) 0:00:20.294 *********
2025-07-12 13:39:17.877594 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_751344b4-b2fa-492b-b080-e9e5b4c67369)
2025-07-12 13:39:17.877604 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_751344b4-b2fa-492b-b080-e9e5b4c67369)
2025-07-12 13:39:17.877615 | orchestrator |
2025-07-12 13:39:17.877626 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:39:17.877637 | orchestrator | Saturday 12 July 2025 13:39:13 +0000 (0:00:00.427) 0:00:20.721 *********
2025-07-12 13:39:17.877667 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_11616408-d26b-4882-b347-f5b812b9aa41)
2025-07-12 13:39:17.877679 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_11616408-d26b-4882-b347-f5b812b9aa41)
2025-07-12 13:39:17.877690 | orchestrator |
2025-07-12 13:39:17.877723 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:39:17.877735 | orchestrator | Saturday 12 July 2025 13:39:14 +0000 (0:00:00.441) 0:00:21.163 *********
2025-07-12 13:39:17.877746 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-07-12 13:39:17.877756 | orchestrator |
2025-07-12 13:39:17.877767 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:39:17.877778 | orchestrator | Saturday 12 July 2025 13:39:14 +0000 (0:00:00.374) 0:00:21.537 *********
2025-07-12 13:39:17.877788 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-07-12 13:39:17.877799 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-07-12 13:39:17.877809 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-07-12 13:39:17.877820 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-07-12 13:39:17.877830 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-07-12 13:39:17.877851 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-07-12 13:39:17.877862 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-07-12 13:39:17.877873 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-07-12 13:39:17.877883 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-07-12 13:39:17.877894 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-07-12 13:39:17.877904 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-07-12 13:39:17.877915 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-07-12 13:39:17.877925 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-07-12 13:39:17.877936 | orchestrator |
2025-07-12 13:39:17.877946 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:39:17.877957 | orchestrator | Saturday 12 July 2025 13:39:14 +0000 (0:00:00.377) 0:00:21.915 *********
2025-07-12 13:39:17.877967 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:39:17.877978 | orchestrator |
2025-07-12 13:39:17.877989 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:39:17.877999 | orchestrator | Saturday 12 July 2025 13:39:15 +0000 (0:00:00.198) 0:00:22.113 *********
2025-07-12 13:39:17.878010 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:39:17.878087 | orchestrator |
2025-07-12 13:39:17.878099 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:39:17.878110 | orchestrator | Saturday 12 July 2025 13:39:15 +0000 (0:00:00.648) 0:00:22.762 *********
2025-07-12 13:39:17.878121 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:39:17.878131 | orchestrator |
2025-07-12 13:39:17.878142 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:39:17.878153 | orchestrator | Saturday 12 July 2025 13:39:15 +0000 (0:00:00.204) 0:00:22.966 *********
2025-07-12 13:39:17.878164 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:39:17.878174 | orchestrator |
2025-07-12 13:39:17.878185 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:39:17.878196 | orchestrator | Saturday 12 July 2025 13:39:16 +0000 (0:00:00.232) 0:00:23.198 *********
2025-07-12 13:39:17.878213 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:39:17.878224 | orchestrator |
2025-07-12 13:39:17.878235 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:39:17.878246 | orchestrator | Saturday 12 July 2025 13:39:16 +0000 (0:00:00.194) 0:00:23.393 *********
2025-07-12 13:39:17.878256 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:39:17.878267 | orchestrator |
2025-07-12 13:39:17.878278 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:39:17.878289 | orchestrator | Saturday 12 July 2025 13:39:16 +0000 (0:00:00.200) 0:00:23.593 *********
2025-07-12 13:39:17.878299 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:39:17.878310 | orchestrator |
2025-07-12 13:39:17.878321 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:39:17.878332 | orchestrator | Saturday 12 July 2025 13:39:16 +0000 (0:00:00.191) 0:00:23.785 *********
2025-07-12 13:39:17.878342 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:39:17.878353 | orchestrator |
2025-07-12 13:39:17.878364 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:39:17.878375 | orchestrator | Saturday 12 July 2025 13:39:17 +0000 (0:00:00.220) 0:00:24.005 *********
2025-07-12 13:39:17.878385 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-07-12 13:39:17.878397 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-07-12 13:39:17.878408 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-07-12 13:39:17.878418 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-07-12 13:39:17.878436 | orchestrator |
2025-07-12 13:39:17.878447 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:39:17.878458 | orchestrator | Saturday 12 July 2025 13:39:17 +0000 (0:00:00.661) 0:00:24.666 *********
2025-07-12 13:39:17.878469 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:39:17.878479 | orchestrator |
2025-07-12 13:39:17.878499 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:39:24.446313 | orchestrator | Saturday 12 July 2025 13:39:17 +0000 (0:00:00.201) 0:00:24.867 *********
2025-07-12 13:39:24.446432 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:39:24.446448 | orchestrator |
2025-07-12 13:39:24.446461 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:39:24.446472 | orchestrator | Saturday 12 July 2025 13:39:18 +0000 (0:00:00.199) 0:00:25.067 *********
2025-07-12 13:39:24.446483 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:39:24.446494 | orchestrator |
2025-07-12 13:39:24.446505 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:39:24.446516 | orchestrator | Saturday 12 July 2025 13:39:18 +0000 (0:00:00.206) 0:00:25.274 *********
2025-07-12 13:39:24.446526 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:39:24.446537 | orchestrator |
2025-07-12 13:39:24.446548 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-07-12 13:39:24.446560 | orchestrator | Saturday 12 July 2025 13:39:18 +0000 (0:00:00.210) 0:00:25.484 *********
2025-07-12 13:39:24.446570 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2025-07-12 13:39:24.446581 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2025-07-12 13:39:24.446592 | orchestrator |
2025-07-12 13:39:24.446603 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-07-12 13:39:24.446613 | orchestrator | Saturday 12 July 2025 13:39:18 +0000 (0:00:00.342) 0:00:25.826 *********
2025-07-12 13:39:24.446624 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:39:24.446635 | orchestrator |
2025-07-12 13:39:24.446645 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-07-12 13:39:24.446656 | orchestrator | Saturday 12 July 2025 13:39:18 +0000 (0:00:00.143) 0:00:25.969 *********
2025-07-12 13:39:24.446666 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:39:24.446677 | orchestrator |
2025-07-12 13:39:24.446687 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-07-12 13:39:24.446720 | orchestrator | Saturday 12 July 2025 13:39:19 +0000 (0:00:00.148) 0:00:26.118 *********
2025-07-12 13:39:24.446733 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:39:24.446743 | orchestrator |
2025-07-12 13:39:24.446754 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-07-12 13:39:24.446765 | orchestrator | Saturday 12 July 2025 13:39:19 +0000 (0:00:00.137) 0:00:26.255 *********
2025-07-12 13:39:24.446775 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:39:24.446787 | orchestrator |
2025-07-12 13:39:24.446797 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-07-12 13:39:24.446808 | orchestrator | Saturday 12 July 2025 13:39:19 +0000 (0:00:00.158) 0:00:26.414 *********
2025-07-12 13:39:24.446819 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8be3c046-75c4-5df6-b59b-0076bb3a4ccd'}})
2025-07-12 13:39:24.446830 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f8ec8ce8-a083-5a5f-ae06-780cf5acbe42'}})
2025-07-12 13:39:24.446841 | orchestrator |
2025-07-12 13:39:24.446853 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-07-12 13:39:24.446865 | orchestrator | Saturday 12 July 2025 13:39:19 +0000 (0:00:00.172) 0:00:26.587 *********
2025-07-12 13:39:24.446877 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8be3c046-75c4-5df6-b59b-0076bb3a4ccd'}})
2025-07-12 13:39:24.446891 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f8ec8ce8-a083-5a5f-ae06-780cf5acbe42'}})
2025-07-12 13:39:24.446927 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:39:24.446940 | orchestrator |
2025-07-12 13:39:24.446953 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-07-12 13:39:24.446965 | orchestrator | Saturday 12 July 2025 13:39:19 +0000 (0:00:00.168) 0:00:26.755 *********
2025-07-12 13:39:24.446978 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8be3c046-75c4-5df6-b59b-0076bb3a4ccd'}})
2025-07-12 13:39:24.446990 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f8ec8ce8-a083-5a5f-ae06-780cf5acbe42'}})
2025-07-12 13:39:24.447002 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:39:24.447014 | orchestrator |
2025-07-12 13:39:24.447025 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-07-12 13:39:24.447038 | orchestrator | Saturday 12 July 2025 13:39:19 +0000 (0:00:00.162) 0:00:26.918 *********
2025-07-12 13:39:24.447050 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8be3c046-75c4-5df6-b59b-0076bb3a4ccd'}})
2025-07-12 13:39:24.447061 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f8ec8ce8-a083-5a5f-ae06-780cf5acbe42'}})
2025-07-12 13:39:24.447089 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:39:24.447102 | orchestrator |
2025-07-12 13:39:24.447114 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-07-12 13:39:24.447133 | orchestrator | Saturday 12 July 2025 13:39:20 +0000 (0:00:00.151) 0:00:27.069 *********
2025-07-12 13:39:24.447146 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:39:24.447158 | orchestrator |
2025-07-12 13:39:24.447170 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-07-12 13:39:24.447183 | orchestrator | Saturday 12 July 2025 13:39:20 +0000 (0:00:00.150) 0:00:27.219 *********
2025-07-12 13:39:24.447195 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:39:24.447207 | orchestrator |
2025-07-12 13:39:24.447218 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-07-12 13:39:24.447229 | orchestrator | Saturday 12 July 2025 13:39:20 +0000 (0:00:00.148) 0:00:27.368 *********
2025-07-12 13:39:24.447240 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:39:24.447250 | orchestrator |
2025-07-12 13:39:24.447278 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-07-12 13:39:24.447290 | orchestrator | Saturday 12 July 2025 13:39:20 +0000 (0:00:00.142) 0:00:27.510 *********
2025-07-12 13:39:24.447301 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:39:24.447311 | orchestrator |
2025-07-12 13:39:24.447322 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-07-12 13:39:24.447332 | orchestrator | Saturday 12 July 2025 13:39:20 +0000 (0:00:00.346) 0:00:27.856 *********
2025-07-12 13:39:24.447343 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:39:24.447353 | orchestrator |
2025-07-12 13:39:24.447363 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-07-12 13:39:24.447374 | orchestrator | Saturday 12 July 2025 13:39:20 +0000 (0:00:00.138) 0:00:27.995 *********
2025-07-12 13:39:24.447384 | orchestrator | ok: [testbed-node-4] => {
2025-07-12 13:39:24.447395 | orchestrator |     "ceph_osd_devices": {
2025-07-12 13:39:24.447405 | orchestrator |         "sdb": {
2025-07-12 13:39:24.447416 | orchestrator |             "osd_lvm_uuid": "8be3c046-75c4-5df6-b59b-0076bb3a4ccd"
2025-07-12 13:39:24.447427 | orchestrator |         },
2025-07-12 13:39:24.447438 | orchestrator |         "sdc": {
2025-07-12 13:39:24.447448 | orchestrator |             "osd_lvm_uuid": "f8ec8ce8-a083-5a5f-ae06-780cf5acbe42"
2025-07-12 13:39:24.447459 | orchestrator |         }
2025-07-12 13:39:24.447469 | orchestrator |     }
2025-07-12 13:39:24.447481 | orchestrator | }
2025-07-12 13:39:24.447491 | orchestrator |
2025-07-12 13:39:24.447502 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-07-12 13:39:24.447513 | orchestrator | Saturday 12 July 2025 13:39:21 +0000 (0:00:00.158) 0:00:28.154 *********
2025-07-12 13:39:24.447532 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:39:24.447543 | orchestrator |
2025-07-12 13:39:24.447553 | orchestrator | TASK [Print DB devices] ********************************************************
2025-07-12 13:39:24.447564 | orchestrator | Saturday 12 July 2025 13:39:21 +0000 (0:00:00.147) 0:00:28.302 *********
2025-07-12 13:39:24.447574 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:39:24.447585 | orchestrator |
2025-07-12 13:39:24.447595 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-07-12 13:39:24.447606 | orchestrator | Saturday 12 July 2025 13:39:21 +0000 (0:00:00.139) 0:00:28.442 *********
2025-07-12 13:39:24.447617 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:39:24.447627 | orchestrator |
2025-07-12 13:39:24.447637 | orchestrator | TASK [Print configuration data] ************************************************
2025-07-12 13:39:24.447648 | orchestrator | Saturday 12 July 2025 13:39:21 +0000 (0:00:00.149) 0:00:28.591 *********
2025-07-12 13:39:24.447658 | orchestrator | changed: [testbed-node-4] => {
2025-07-12 13:39:24.447669 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-07-12 13:39:24.447679 | orchestrator |         "ceph_osd_devices": {
2025-07-12 13:39:24.447690 | orchestrator |             "sdb": {
2025-07-12 13:39:24.447733 | orchestrator |                 "osd_lvm_uuid": "8be3c046-75c4-5df6-b59b-0076bb3a4ccd"
2025-07-12 13:39:24.447746 | orchestrator |             },
2025-07-12 13:39:24.447756 | orchestrator |             "sdc": {
2025-07-12 13:39:24.447767 | orchestrator |                 "osd_lvm_uuid": "f8ec8ce8-a083-5a5f-ae06-780cf5acbe42"
2025-07-12 13:39:24.447778 | orchestrator |             }
2025-07-12 13:39:24.447788 | orchestrator |         },
2025-07-12 13:39:24.447799 | orchestrator |         "lvm_volumes": [
2025-07-12 13:39:24.447809 | orchestrator |             {
2025-07-12 13:39:24.447820 | orchestrator |                 "data": "osd-block-8be3c046-75c4-5df6-b59b-0076bb3a4ccd",
2025-07-12 13:39:24.447830 | orchestrator |                 "data_vg": "ceph-8be3c046-75c4-5df6-b59b-0076bb3a4ccd"
2025-07-12 13:39:24.447841 | orchestrator |             },
2025-07-12 13:39:24.447851 | orchestrator |             {
2025-07-12 13:39:24.447862 | orchestrator |                 "data": "osd-block-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42",
2025-07-12 13:39:24.447872 | orchestrator |                 "data_vg": "ceph-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42"
2025-07-12 13:39:24.447883 | orchestrator |             }
2025-07-12 13:39:24.447893 | orchestrator |         ]
2025-07-12 13:39:24.447904 | orchestrator |     }
2025-07-12 13:39:24.447914 | orchestrator | }
2025-07-12 13:39:24.447925 | orchestrator |
2025-07-12 13:39:24.447935 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-07-12 13:39:24.447946 | orchestrator | Saturday 12 July 2025 13:39:21 +0000 (0:00:00.215) 0:00:28.807 *********
2025-07-12 13:39:24.447956 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-07-12 13:39:24.447967 | orchestrator |
2025-07-12 13:39:24.447977 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-07-12 13:39:24.447988 | orchestrator |
2025-07-12 13:39:24.447998 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-07-12 13:39:24.448009 | orchestrator | Saturday 12 July 2025 13:39:22 +0000 (0:00:01.105) 0:00:29.912 *********
2025-07-12 13:39:24.448019 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-07-12 13:39:24.448030 | orchestrator |
2025-07-12 13:39:24.448040 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-07-12 13:39:24.448051 | orchestrator | Saturday 12 July 2025 13:39:23 +0000 (0:00:00.473) 0:00:30.386 *********
2025-07-12 13:39:24.448061 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:39:24.448072 | orchestrator |
2025-07-12 13:39:24.448082 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:39:24.448093 | orchestrator | Saturday 12 July 2025 13:39:24 +0000 (0:00:00.652) 0:00:31.038 *********
2025-07-12 13:39:24.448104 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-07-12 13:39:24.448121 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-07-12 13:39:24.448132 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-07-12 13:39:24.448142 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-07-12 13:39:24.448153 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-07-12 13:39:24.448163 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-07-12 13:39:24.448180 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-07-12 13:39:32.843481 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-07-12 13:39:32.843592 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-07-12 13:39:32.843607 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-07-12 13:39:32.843618 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-07-12 13:39:32.843649 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-07-12 13:39:32.843661 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-07-12 13:39:32.843672 | orchestrator |
2025-07-12 13:39:32.843684 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:39:32.843696 | orchestrator | Saturday 12 July 2025 13:39:24 +0000 (0:00:00.394) 0:00:31.432 *********
2025-07-12 13:39:32.843757 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:39:32.843770 | orchestrator |
2025-07-12 13:39:32.843782 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:39:32.843793 | orchestrator | Saturday 12 July 2025 13:39:24 +0000 (0:00:00.202) 0:00:31.635 *********
2025-07-12 13:39:32.843803 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:39:32.843814 | orchestrator |
2025-07-12 13:39:32.843825 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:39:32.843836 | orchestrator | Saturday 12 July 2025 13:39:24 +0000 (0:00:00.219) 0:00:31.854 *********
2025-07-12 13:39:32.843846 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:39:32.843857 | orchestrator |
2025-07-12 13:39:32.843868 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:39:32.843878 | orchestrator | Saturday 12 July 2025 13:39:25 +0000 (0:00:00.237) 0:00:32.091 *********
2025-07-12 13:39:32.843889 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:39:32.843899 | orchestrator |
2025-07-12 13:39:32.843910 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:39:32.843921 | orchestrator | Saturday 12 July 2025 13:39:25 +0000 (0:00:00.196) 0:00:32.287 *********
2025-07-12 13:39:32.843931 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:39:32.843942 | orchestrator |
2025-07-12 13:39:32.843953 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:39:32.843963 | orchestrator | Saturday 12 July 2025 13:39:25 +0000 (0:00:00.194) 0:00:32.482 *********
2025-07-12 13:39:32.843974 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:39:32.843985 | orchestrator |
2025-07-12 13:39:32.843997 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:39:32.844010 | orchestrator | Saturday 12 July 2025 13:39:25 +0000 (0:00:00.205) 0:00:32.687 *********
2025-07-12 13:39:32.844022 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:39:32.844033 | orchestrator |
2025-07-12 13:39:32.844045 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:39:32.844057 | orchestrator | Saturday 12 July 2025 13:39:25 +0000 (0:00:00.204) 0:00:32.891 *********
2025-07-12 13:39:32.844069 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:39:32.844081 | orchestrator |
2025-07-12 13:39:32.844093 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:39:32.844129 | orchestrator | Saturday 12 July 2025 13:39:26 +0000 (0:00:00.207) 0:00:33.099 *********
2025-07-12 13:39:32.844142 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968)
2025-07-12 13:39:32.844155 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968)
2025-07-12 13:39:32.844167 | orchestrator |
2025-07-12 13:39:32.844179 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:39:32.844192 | orchestrator | Saturday 12 July 2025 13:39:26 +0000 (0:00:00.608) 0:00:33.707 *********
2025-07-12 13:39:32.844204 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_375d9971-3091-4ee7-ad22-0f2ee4316c51)
2025-07-12 13:39:32.844216 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_375d9971-3091-4ee7-ad22-0f2ee4316c51)
2025-07-12 13:39:32.844228 | orchestrator |
2025-07-12 13:39:32.844239 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:39:32.844251 | orchestrator | Saturday 12 July 2025 13:39:27 +0000 (0:00:00.819) 0:00:34.526 *********
2025-07-12 13:39:32.844263 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_80678cc2-85df-4096-9cf9-3a4ced065123)
2025-07-12 13:39:32.844276 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_80678cc2-85df-4096-9cf9-3a4ced065123)
2025-07-12 13:39:32.844288 | orchestrator |
2025-07-12 13:39:32.844300 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:39:32.844312 | orchestrator | Saturday 12 July 2025 13:39:27 +0000 (0:00:00.439) 0:00:34.965 *********
2025-07-12 13:39:32.844324 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1e2296ca-3498-48cf-a25a-293306b54174)
2025-07-12 13:39:32.844336 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1e2296ca-3498-48cf-a25a-293306b54174)
2025-07-12 13:39:32.844348 | orchestrator |
2025-07-12 13:39:32.844359 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:39:32.844370 | orchestrator | Saturday 12 July 2025 13:39:28 +0000 (0:00:00.417) 0:00:35.383 *********
2025-07-12 13:39:32.844381 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-07-12 13:39:32.844391 | orchestrator |
2025-07-12 13:39:32.844401 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:39:32.844412 | orchestrator | Saturday 12 July 2025 13:39:28 +0000 (0:00:00.363) 0:00:35.746 *********
2025-07-12 13:39:32.844449 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-07-12 13:39:32.844472 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-07-12 13:39:32.844492 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-07-12 13:39:32.844504 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-07-12 13:39:32.844514 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-07-12 13:39:32.844525 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-07-12 13:39:32.844535 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-07-12 13:39:32.844546 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-07-12 13:39:32.844556 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-07-12 13:39:32.844566 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-07-12 13:39:32.844577 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-07-12 13:39:32.844587 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-07-12 13:39:32.844597 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-07-12 13:39:32.844616 | orchestrator |
2025-07-12 13:39:32.844627 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:39:32.844638 | orchestrator | Saturday 12 July 2025 13:39:29 +0000 (0:00:00.404) 0:00:36.151 *********
2025-07-12 13:39:32.844648 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:39:32.844658 | orchestrator |
2025-07-12 13:39:32.844669 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:39:32.844680 | orchestrator | Saturday 12 July 2025 13:39:29 +0000 (0:00:00.240) 0:00:36.391 *********
2025-07-12 13:39:32.844690 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:39:32.844700 | orchestrator |
2025-07-12 13:39:32.844739 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:39:32.844750 | orchestrator | Saturday 12 July 2025 13:39:29 +0000 (0:00:00.218) 0:00:36.610 *********
2025-07-12 13:39:32.844761 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:39:32.844771 | orchestrator |
2025-07-12 13:39:32.844781 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:39:32.844792 | orchestrator | Saturday 12 July 2025 13:39:29 +0000 (0:00:00.205) 0:00:36.815 *********
2025-07-12 13:39:32.844803 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:39:32.844813 | orchestrator |
2025-07-12 13:39:32.844824 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:39:32.844834 | orchestrator | Saturday 12 July 2025 13:39:30 +0000 (0:00:00.212) 0:00:37.027 *********
2025-07-12 13:39:32.844845 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:39:32.844855 | orchestrator |
2025-07-12 13:39:32.844872 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:39:32.844883 | orchestrator | Saturday 12 July 2025 13:39:30 +0000 (0:00:00.206) 0:00:37.234 *********
2025-07-12 13:39:32.844893 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:39:32.844904 | orchestrator |
2025-07-12 13:39:32.844914 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:39:32.844925 | orchestrator | Saturday 12 July 2025 13:39:30 +0000 (0:00:00.679) 0:00:37.913 *********
2025-07-12 13:39:32.844935 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:39:32.844946 | orchestrator |
2025-07-12 13:39:32.844956 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:39:32.844967 | orchestrator | Saturday 12 July 2025 13:39:31 +0000 (0:00:00.215) 0:00:38.129 *********
2025-07-12 13:39:32.844977 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:39:32.844988 | orchestrator |
2025-07-12 13:39:32.844998 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:39:32.845009 | orchestrator | Saturday 12 July 2025 13:39:31 +0000 (0:00:00.225) 0:00:38.354 *********
2025-07-12 13:39:32.845019 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-07-12 13:39:32.845095 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-07-12 13:39:32.845106 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-07-12
13:39:32.845117 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-07-12 13:39:32.845127 | orchestrator | 2025-07-12 13:39:32.845138 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:39:32.845149 | orchestrator | Saturday 12 July 2025 13:39:31 +0000 (0:00:00.646) 0:00:39.001 ********* 2025-07-12 13:39:32.845159 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:32.845170 | orchestrator | 2025-07-12 13:39:32.845180 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:39:32.845191 | orchestrator | Saturday 12 July 2025 13:39:32 +0000 (0:00:00.211) 0:00:39.212 ********* 2025-07-12 13:39:32.845207 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:32.845217 | orchestrator | 2025-07-12 13:39:32.845228 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:39:32.845238 | orchestrator | Saturday 12 July 2025 13:39:32 +0000 (0:00:00.213) 0:00:39.426 ********* 2025-07-12 13:39:32.845249 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:32.845268 | orchestrator | 2025-07-12 13:39:32.845279 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:39:32.845289 | orchestrator | Saturday 12 July 2025 13:39:32 +0000 (0:00:00.198) 0:00:39.624 ********* 2025-07-12 13:39:32.845300 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:32.845310 | orchestrator | 2025-07-12 13:39:32.845321 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-07-12 13:39:32.845340 | orchestrator | Saturday 12 July 2025 13:39:32 +0000 (0:00:00.210) 0:00:39.835 ********* 2025-07-12 13:39:37.169652 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-07-12 13:39:37.169826 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 
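The osd_lvm_uuid values that appear a few records below (76cf46ce-80cb-5d18-… and 465622e3-903d-5505-…) carry version nibble 5, i.e. name-based UUIDs, which is what makes re-running this play idempotent: the same host/device pair always maps back to the same VG/LV name. A minimal sketch of deriving such stable UUIDs — the namespace and name string here are assumptions for illustration, not the playbook's actual inputs:

```python
import uuid

# Assumption: the log only shows that the resulting osd_lvm_uuid values
# are version-5 (name-based, SHA-1) UUIDs; the real namespace and name
# string used by the playbook are not visible in this output.
OSD_NAMESPACE = uuid.NAMESPACE_DNS  # placeholder namespace

def osd_lvm_uuid(hostname: str, device: str) -> str:
    """Return a stable UUID for a host's OSD device: the same
    host/device pair always yields the same UUID, so a re-run
    cannot invent new VG/LV names."""
    return str(uuid.uuid5(OSD_NAMESPACE, f"{hostname}:{device}"))

# Deterministic: calling twice yields the identical UUID.
assert osd_lvm_uuid("testbed-node-5", "sdb") == osd_lvm_uuid("testbed-node-5", "sdb")
```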
2025-07-12 13:39:37.169846 | orchestrator | 2025-07-12 13:39:37.169860 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-07-12 13:39:37.169871 | orchestrator | Saturday 12 July 2025 13:39:33 +0000 (0:00:00.185) 0:00:40.021 ********* 2025-07-12 13:39:37.169882 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:37.169893 | orchestrator | 2025-07-12 13:39:37.169905 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-07-12 13:39:37.169915 | orchestrator | Saturday 12 July 2025 13:39:33 +0000 (0:00:00.134) 0:00:40.155 ********* 2025-07-12 13:39:37.169926 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:37.169937 | orchestrator | 2025-07-12 13:39:37.169947 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-07-12 13:39:37.169958 | orchestrator | Saturday 12 July 2025 13:39:33 +0000 (0:00:00.148) 0:00:40.304 ********* 2025-07-12 13:39:37.169969 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:37.169979 | orchestrator | 2025-07-12 13:39:37.169990 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-07-12 13:39:37.170000 | orchestrator | Saturday 12 July 2025 13:39:33 +0000 (0:00:00.140) 0:00:40.444 ********* 2025-07-12 13:39:37.170011 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:39:37.170076 | orchestrator | 2025-07-12 13:39:37.170088 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-07-12 13:39:37.170098 | orchestrator | Saturday 12 July 2025 13:39:33 +0000 (0:00:00.330) 0:00:40.775 ********* 2025-07-12 13:39:37.170109 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '76cf46ce-80cb-5d18-8384-c0838affc5b6'}}) 2025-07-12 13:39:37.170121 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 
'465622e3-903d-5505-a41f-76599f0f3897'}}) 2025-07-12 13:39:37.170131 | orchestrator | 2025-07-12 13:39:37.170142 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-07-12 13:39:37.170153 | orchestrator | Saturday 12 July 2025 13:39:33 +0000 (0:00:00.187) 0:00:40.962 ********* 2025-07-12 13:39:37.170164 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '76cf46ce-80cb-5d18-8384-c0838affc5b6'}})  2025-07-12 13:39:37.170176 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '465622e3-903d-5505-a41f-76599f0f3897'}})  2025-07-12 13:39:37.170187 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:37.170198 | orchestrator | 2025-07-12 13:39:37.170208 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-07-12 13:39:37.170220 | orchestrator | Saturday 12 July 2025 13:39:34 +0000 (0:00:00.149) 0:00:41.111 ********* 2025-07-12 13:39:37.170231 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '76cf46ce-80cb-5d18-8384-c0838affc5b6'}})  2025-07-12 13:39:37.170242 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '465622e3-903d-5505-a41f-76599f0f3897'}})  2025-07-12 13:39:37.170253 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:37.170263 | orchestrator | 2025-07-12 13:39:37.170274 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-07-12 13:39:37.170285 | orchestrator | Saturday 12 July 2025 13:39:34 +0000 (0:00:00.157) 0:00:41.269 ********* 2025-07-12 13:39:37.170321 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '76cf46ce-80cb-5d18-8384-c0838affc5b6'}})  2025-07-12 13:39:37.170333 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 
'465622e3-903d-5505-a41f-76599f0f3897'}})  2025-07-12 13:39:37.170344 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:37.170354 | orchestrator | 2025-07-12 13:39:37.170365 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-07-12 13:39:37.170375 | orchestrator | Saturday 12 July 2025 13:39:34 +0000 (0:00:00.171) 0:00:41.441 ********* 2025-07-12 13:39:37.170386 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:39:37.170396 | orchestrator | 2025-07-12 13:39:37.170407 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-07-12 13:39:37.170417 | orchestrator | Saturday 12 July 2025 13:39:34 +0000 (0:00:00.129) 0:00:41.571 ********* 2025-07-12 13:39:37.170428 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:39:37.170438 | orchestrator | 2025-07-12 13:39:37.170449 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-07-12 13:39:37.170459 | orchestrator | Saturday 12 July 2025 13:39:34 +0000 (0:00:00.146) 0:00:41.717 ********* 2025-07-12 13:39:37.170470 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:37.170480 | orchestrator | 2025-07-12 13:39:37.170491 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-07-12 13:39:37.170501 | orchestrator | Saturday 12 July 2025 13:39:34 +0000 (0:00:00.139) 0:00:41.857 ********* 2025-07-12 13:39:37.170512 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:37.170522 | orchestrator | 2025-07-12 13:39:37.170533 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-07-12 13:39:37.170544 | orchestrator | Saturday 12 July 2025 13:39:34 +0000 (0:00:00.131) 0:00:41.988 ********* 2025-07-12 13:39:37.170554 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:37.170565 | orchestrator | 2025-07-12 13:39:37.170575 | orchestrator | TASK [Print 
ceph_osd_devices] ************************************************** 2025-07-12 13:39:37.170586 | orchestrator | Saturday 12 July 2025 13:39:35 +0000 (0:00:00.148) 0:00:42.137 ********* 2025-07-12 13:39:37.170597 | orchestrator | ok: [testbed-node-5] => { 2025-07-12 13:39:37.170608 | orchestrator |  "ceph_osd_devices": { 2025-07-12 13:39:37.170618 | orchestrator |  "sdb": { 2025-07-12 13:39:37.170630 | orchestrator |  "osd_lvm_uuid": "76cf46ce-80cb-5d18-8384-c0838affc5b6" 2025-07-12 13:39:37.170664 | orchestrator |  }, 2025-07-12 13:39:37.170677 | orchestrator |  "sdc": { 2025-07-12 13:39:37.170687 | orchestrator |  "osd_lvm_uuid": "465622e3-903d-5505-a41f-76599f0f3897" 2025-07-12 13:39:37.170698 | orchestrator |  } 2025-07-12 13:39:37.170729 | orchestrator |  } 2025-07-12 13:39:37.170741 | orchestrator | } 2025-07-12 13:39:37.170752 | orchestrator | 2025-07-12 13:39:37.170762 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-07-12 13:39:37.170773 | orchestrator | Saturday 12 July 2025 13:39:35 +0000 (0:00:00.151) 0:00:42.288 ********* 2025-07-12 13:39:37.170783 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:37.170794 | orchestrator | 2025-07-12 13:39:37.170804 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-07-12 13:39:37.170815 | orchestrator | Saturday 12 July 2025 13:39:35 +0000 (0:00:00.135) 0:00:42.423 ********* 2025-07-12 13:39:37.170825 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:37.170836 | orchestrator | 2025-07-12 13:39:37.170847 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-07-12 13:39:37.170874 | orchestrator | Saturday 12 July 2025 13:39:35 +0000 (0:00:00.348) 0:00:42.772 ********* 2025-07-12 13:39:37.170885 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:37.170895 | orchestrator | 2025-07-12 13:39:37.170906 | orchestrator | TASK [Print 
configuration data] ************************************************ 2025-07-12 13:39:37.170917 | orchestrator | Saturday 12 July 2025 13:39:35 +0000 (0:00:00.142) 0:00:42.915 ********* 2025-07-12 13:39:37.170937 | orchestrator | changed: [testbed-node-5] => { 2025-07-12 13:39:37.170948 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-07-12 13:39:37.170959 | orchestrator |  "ceph_osd_devices": { 2025-07-12 13:39:37.170969 | orchestrator |  "sdb": { 2025-07-12 13:39:37.170980 | orchestrator |  "osd_lvm_uuid": "76cf46ce-80cb-5d18-8384-c0838affc5b6" 2025-07-12 13:39:37.170991 | orchestrator |  }, 2025-07-12 13:39:37.171001 | orchestrator |  "sdc": { 2025-07-12 13:39:37.171012 | orchestrator |  "osd_lvm_uuid": "465622e3-903d-5505-a41f-76599f0f3897" 2025-07-12 13:39:37.171022 | orchestrator |  } 2025-07-12 13:39:37.171033 | orchestrator |  }, 2025-07-12 13:39:37.171043 | orchestrator |  "lvm_volumes": [ 2025-07-12 13:39:37.171053 | orchestrator |  { 2025-07-12 13:39:37.171064 | orchestrator |  "data": "osd-block-76cf46ce-80cb-5d18-8384-c0838affc5b6", 2025-07-12 13:39:37.171075 | orchestrator |  "data_vg": "ceph-76cf46ce-80cb-5d18-8384-c0838affc5b6" 2025-07-12 13:39:37.171086 | orchestrator |  }, 2025-07-12 13:39:37.171096 | orchestrator |  { 2025-07-12 13:39:37.171107 | orchestrator |  "data": "osd-block-465622e3-903d-5505-a41f-76599f0f3897", 2025-07-12 13:39:37.171118 | orchestrator |  "data_vg": "ceph-465622e3-903d-5505-a41f-76599f0f3897" 2025-07-12 13:39:37.171128 | orchestrator |  } 2025-07-12 13:39:37.171139 | orchestrator |  ] 2025-07-12 13:39:37.171150 | orchestrator |  } 2025-07-12 13:39:37.171160 | orchestrator | } 2025-07-12 13:39:37.171171 | orchestrator | 2025-07-12 13:39:37.171182 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-07-12 13:39:37.171192 | orchestrator | Saturday 12 July 2025 13:39:36 +0000 (0:00:00.221) 0:00:43.136 ********* 2025-07-12 13:39:37.171203 | orchestrator | changed: 
[testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-07-12 13:39:37.171214 | orchestrator |
2025-07-12 13:39:37.171224 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:39:37.171235 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-07-12 13:39:37.171247 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-07-12 13:39:37.171258 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-07-12 13:39:37.171269 | orchestrator |
2025-07-12 13:39:37.171279 | orchestrator |
2025-07-12 13:39:37.171290 | orchestrator |
2025-07-12 13:39:37.171300 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:39:37.171311 | orchestrator | Saturday 12 July 2025 13:39:37 +0000 (0:00:01.011) 0:00:44.148 *********
2025-07-12 13:39:37.171321 | orchestrator | ===============================================================================
2025-07-12 13:39:37.171331 | orchestrator | Write configuration file ------------------------------------------------ 4.55s
2025-07-12 13:39:37.171342 | orchestrator | Add known partitions to the list of available block devices ------------- 1.27s
2025-07-12 13:39:37.171352 | orchestrator | Add known partitions to the list of available block devices ------------- 1.20s
2025-07-12 13:39:37.171363 | orchestrator | Add known links to the list of available block devices ------------------ 1.18s
2025-07-12 13:39:37.171373 | orchestrator | Get initial list of available block devices ----------------------------- 1.16s
2025-07-12 13:39:37.171384 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.05s
2025-07-12 13:39:37.171394 | orchestrator | Add known links to the list of available block devices ------------------ 0.82s
2025-07-12 13:39:37.171409 | orchestrator | Add known links to the list of available block devices ------------------ 0.80s
2025-07-12 13:39:37.171420 | orchestrator | Add known links to the list of available block devices ------------------ 0.75s
2025-07-12 13:39:37.171430 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s
2025-07-12 13:39:37.171447 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.71s
2025-07-12 13:39:37.171458 | orchestrator | Print configuration data ------------------------------------------------ 0.70s
2025-07-12 13:39:37.171468 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.69s
2025-07-12 13:39:37.171479 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s
2025-07-12 13:39:37.171497 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s
2025-07-12 13:39:37.494150 | orchestrator | Print DB devices -------------------------------------------------------- 0.65s
2025-07-12 13:39:37.494261 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s
2025-07-12 13:39:37.494275 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s
2025-07-12 13:39:37.494292 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.64s
2025-07-12 13:39:37.494311 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s
2025-07-12 13:39:59.795627 | orchestrator | 2025-07-12 13:39:59 | INFO  | Task 2e1ebb0e-f141-418b-81ec-2fc9dbfb6e9e (sync inventory) is running in background. Output coming soon.
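As the "Print configuration data" task output above shows, each entry in ceph_osd_devices is expanded into one lvm_volumes item whose LV and VG names embed the device's osd_lvm_uuid. A small sketch of that mapping for the block-only layout used in this run (the helper name is illustrative, not the playbook's actual Jinja2):

```python
def build_lvm_volumes(ceph_osd_devices):
    """Expand ceph_osd_devices into block-only lvm_volumes entries,
    mirroring the names printed by the play: LV 'osd-block-<uuid>'
    inside VG 'ceph-<uuid>'."""
    return [
        {
            "data": "osd-block-{}".format(cfg["osd_lvm_uuid"]),
            "data_vg": "ceph-{}".format(cfg["osd_lvm_uuid"]),
        }
        for _, cfg in sorted(ceph_osd_devices.items())
    ]

# Values taken from the testbed-node-5 output above.
devices = {
    "sdb": {"osd_lvm_uuid": "76cf46ce-80cb-5d18-8384-c0838affc5b6"},
    "sdc": {"osd_lvm_uuid": "465622e3-903d-5505-a41f-76599f0f3897"},
}
lvm_volumes = build_lvm_volumes(devices)
```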
2025-07-12 13:40:19.293215 | orchestrator | 2025-07-12 13:40:01 | INFO  | Starting group_vars file reorganization
2025-07-12 13:40:19.293338 | orchestrator | 2025-07-12 13:40:01 | INFO  | Moved 0 file(s) to their respective directories
2025-07-12 13:40:19.293357 | orchestrator | 2025-07-12 13:40:01 | INFO  | Group_vars file reorganization completed
2025-07-12 13:40:19.293370 | orchestrator | 2025-07-12 13:40:03 | INFO  | Starting variable preparation from inventory
2025-07-12 13:40:19.293381 | orchestrator | 2025-07-12 13:40:04 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-07-12 13:40:19.293392 | orchestrator | 2025-07-12 13:40:04 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-07-12 13:40:19.293403 | orchestrator | 2025-07-12 13:40:04 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-07-12 13:40:19.293414 | orchestrator | 2025-07-12 13:40:04 | INFO  | 3 file(s) written, 6 host(s) processed
2025-07-12 13:40:19.293426 | orchestrator | 2025-07-12 13:40:04 | INFO  | Variable preparation completed
2025-07-12 13:40:19.293436 | orchestrator | 2025-07-12 13:40:05 | INFO  | Starting inventory overwrite handling
2025-07-12 13:40:19.293447 | orchestrator | 2025-07-12 13:40:05 | INFO  | Handling group overwrites in 99-overwrite
2025-07-12 13:40:19.293458 | orchestrator | 2025-07-12 13:40:05 | INFO  | Removing group frr:children from 60-generic
2025-07-12 13:40:19.293469 | orchestrator | 2025-07-12 13:40:05 | INFO  | Removing group storage:children from 50-kolla
2025-07-12 13:40:19.293480 | orchestrator | 2025-07-12 13:40:05 | INFO  | Removing group netbird:children from 50-infrastruture
2025-07-12 13:40:19.293491 | orchestrator | 2025-07-12 13:40:05 | INFO  | Removing group ceph-rgw from 50-ceph
2025-07-12 13:40:19.293502 | orchestrator | 2025-07-12 13:40:05 | INFO  | Removing group ceph-mds from 50-ceph
2025-07-12 13:40:19.293513 | orchestrator | 2025-07-12 13:40:05 | INFO  | Handling group overwrites in 20-roles
2025-07-12 13:40:19.293524 | orchestrator | 2025-07-12 13:40:05 | INFO  | Removing group k3s_node from 50-infrastruture
2025-07-12 13:40:19.293535 | orchestrator | 2025-07-12 13:40:05 | INFO  | Removed 6 group(s) in total
2025-07-12 13:40:19.293546 | orchestrator | 2025-07-12 13:40:05 | INFO  | Inventory overwrite handling completed
2025-07-12 13:40:19.293557 | orchestrator | 2025-07-12 13:40:06 | INFO  | Starting merge of inventory files
2025-07-12 13:40:19.293594 | orchestrator | 2025-07-12 13:40:06 | INFO  | Inventory files merged successfully
2025-07-12 13:40:19.293606 | orchestrator | 2025-07-12 13:40:10 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-07-12 13:40:19.293617 | orchestrator | 2025-07-12 13:40:18 | INFO  | Successfully wrote ClusterShell configuration
2025-07-12 13:40:19.293628 | orchestrator | [master 0c77d60] 2025-07-12-13-40
2025-07-12 13:40:19.293640 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2025-07-12 13:40:21.467253 | orchestrator | 2025-07-12 13:40:21 | INFO  | Task c1c167e6-8305-4d0f-ab73-72edb634600d (ceph-create-lvm-devices) was prepared for execution.
2025-07-12 13:40:21.467351 | orchestrator | 2025-07-12 13:40:21 | INFO  | It takes a moment until task c1c167e6-8305-4d0f-ab73-72edb634600d (ceph-create-lvm-devices) has been started and output is visible here.
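The "Removing group … from …" messages above come from the inventory overwrite handling: any group defined in a higher-priority layer (here 99-overwrite and 20-roles) is stripped from the lower-priority files before the merge, so the overwrite wins. A rough sketch of the idea over dict-shaped layers — the layer contents below are made up for illustration; the real implementation operates on the OSISM inventory files and is not shown in this log:

```python
def apply_group_overwrites(layers, overwrite_layer):
    """Delete every group defined in overwrite_layer from all other
    layers, returning the log messages in removal order."""
    removed = []
    for group in layers[overwrite_layer]:
        for name, groups in layers.items():
            if name != overwrite_layer and group in groups:
                del groups[group]
                removed.append("Removing group {} from {}".format(group, name))
    return removed

# Hypothetical layer contents; only the layer/group names echo the log.
layers = {
    "60-generic": {"frr:children": ["generic"]},
    "50-kolla": {"storage:children": ["storage"]},
    "99-overwrite": {"frr:children": ["testbed-nodes"], "storage:children": ["ceph-osd"]},
}
messages = apply_group_overwrites(layers, "99-overwrite")
```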
2025-07-12 13:40:33.268118 | orchestrator | 2025-07-12 13:40:33.268265 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-07-12 13:40:33.268292 | orchestrator | 2025-07-12 13:40:33.268314 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-07-12 13:40:33.268335 | orchestrator | Saturday 12 July 2025 13:40:25 +0000 (0:00:00.322) 0:00:00.322 ********* 2025-07-12 13:40:33.268349 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-12 13:40:33.268361 | orchestrator | 2025-07-12 13:40:33.268372 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-07-12 13:40:33.268382 | orchestrator | Saturday 12 July 2025 13:40:25 +0000 (0:00:00.287) 0:00:00.610 ********* 2025-07-12 13:40:33.268393 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:40:33.268405 | orchestrator | 2025-07-12 13:40:33.268416 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:40:33.268452 | orchestrator | Saturday 12 July 2025 13:40:26 +0000 (0:00:00.236) 0:00:00.847 ********* 2025-07-12 13:40:33.268463 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-07-12 13:40:33.268475 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-07-12 13:40:33.268486 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-07-12 13:40:33.268496 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-07-12 13:40:33.268507 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-07-12 13:40:33.268517 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-07-12 13:40:33.268528 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-07-12 13:40:33.268538 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-07-12 13:40:33.268549 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-07-12 13:40:33.268560 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-07-12 13:40:33.268570 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-07-12 13:40:33.268582 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-07-12 13:40:33.268592 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-07-12 13:40:33.268603 | orchestrator | 2025-07-12 13:40:33.268613 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:40:33.268624 | orchestrator | Saturday 12 July 2025 13:40:26 +0000 (0:00:00.406) 0:00:01.254 ********* 2025-07-12 13:40:33.268635 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:33.268686 | orchestrator | 2025-07-12 13:40:33.268699 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:40:33.268710 | orchestrator | Saturday 12 July 2025 13:40:27 +0000 (0:00:00.455) 0:00:01.710 ********* 2025-07-12 13:40:33.268743 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:33.268754 | orchestrator | 2025-07-12 13:40:33.268765 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:40:33.268775 | orchestrator | Saturday 12 July 2025 13:40:27 +0000 (0:00:00.202) 0:00:01.913 ********* 2025-07-12 13:40:33.268786 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:33.268796 | orchestrator | 2025-07-12 13:40:33.268807 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2025-07-12 13:40:33.268818 | orchestrator | Saturday 12 July 2025 13:40:27 +0000 (0:00:00.212) 0:00:02.126 ********* 2025-07-12 13:40:33.268828 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:33.268839 | orchestrator | 2025-07-12 13:40:33.268849 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:40:33.268860 | orchestrator | Saturday 12 July 2025 13:40:27 +0000 (0:00:00.179) 0:00:02.305 ********* 2025-07-12 13:40:33.268870 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:33.268880 | orchestrator | 2025-07-12 13:40:33.268891 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:40:33.268901 | orchestrator | Saturday 12 July 2025 13:40:27 +0000 (0:00:00.207) 0:00:02.513 ********* 2025-07-12 13:40:33.268912 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:33.268922 | orchestrator | 2025-07-12 13:40:33.268932 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:40:33.268943 | orchestrator | Saturday 12 July 2025 13:40:28 +0000 (0:00:00.195) 0:00:02.708 ********* 2025-07-12 13:40:33.268953 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:33.268963 | orchestrator | 2025-07-12 13:40:33.268974 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:40:33.268985 | orchestrator | Saturday 12 July 2025 13:40:28 +0000 (0:00:00.210) 0:00:02.918 ********* 2025-07-12 13:40:33.268996 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:33.269007 | orchestrator | 2025-07-12 13:40:33.269017 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:40:33.269028 | orchestrator | Saturday 12 July 2025 13:40:28 +0000 (0:00:00.188) 0:00:03.107 ********* 2025-07-12 13:40:33.269039 | 
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5d5efef6-d196-4496-8e4e-101ce21afc70) 2025-07-12 13:40:33.269050 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5d5efef6-d196-4496-8e4e-101ce21afc70) 2025-07-12 13:40:33.269061 | orchestrator | 2025-07-12 13:40:33.269071 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:40:33.269082 | orchestrator | Saturday 12 July 2025 13:40:28 +0000 (0:00:00.407) 0:00:03.515 ********* 2025-07-12 13:40:33.269114 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_cf6824d0-2336-4864-a32f-bffef7606523) 2025-07-12 13:40:33.269131 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_cf6824d0-2336-4864-a32f-bffef7606523) 2025-07-12 13:40:33.269142 | orchestrator | 2025-07-12 13:40:33.269153 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:40:33.269163 | orchestrator | Saturday 12 July 2025 13:40:29 +0000 (0:00:00.405) 0:00:03.920 ********* 2025-07-12 13:40:33.269174 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_bad1a367-9870-4c1b-af18-4999b26662c8) 2025-07-12 13:40:33.269185 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_bad1a367-9870-4c1b-af18-4999b26662c8) 2025-07-12 13:40:33.269195 | orchestrator | 2025-07-12 13:40:33.269206 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:40:33.269216 | orchestrator | Saturday 12 July 2025 13:40:29 +0000 (0:00:00.602) 0:00:04.522 ********* 2025-07-12 13:40:33.269227 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ec46bf14-c827-46d0-9a8c-19525aeacad6) 2025-07-12 13:40:33.269237 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ec46bf14-c827-46d0-9a8c-19525aeacad6) 2025-07-12 13:40:33.269257 | orchestrator | 2025-07-12 13:40:33.269267 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:40:33.269278 | orchestrator | Saturday 12 July 2025 13:40:30 +0000 (0:00:00.640) 0:00:05.163 ********* 2025-07-12 13:40:33.269296 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-07-12 13:40:33.269317 | orchestrator | 2025-07-12 13:40:33.269339 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:40:33.269363 | orchestrator | Saturday 12 July 2025 13:40:31 +0000 (0:00:00.732) 0:00:05.896 ********* 2025-07-12 13:40:33.269384 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-07-12 13:40:33.269395 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-07-12 13:40:33.269406 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-07-12 13:40:33.269417 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-07-12 13:40:33.269427 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-07-12 13:40:33.269437 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-07-12 13:40:33.269447 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-07-12 13:40:33.269458 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-07-12 13:40:33.269468 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-07-12 13:40:33.269479 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-07-12 13:40:33.269489 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-07-12 13:40:33.269499 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-07-12 13:40:33.269509 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-07-12 13:40:33.269520 | orchestrator | 2025-07-12 13:40:33.269530 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:40:33.269541 | orchestrator | Saturday 12 July 2025 13:40:31 +0000 (0:00:00.441) 0:00:06.338 ********* 2025-07-12 13:40:33.269551 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:33.269562 | orchestrator | 2025-07-12 13:40:33.269572 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:40:33.269583 | orchestrator | Saturday 12 July 2025 13:40:31 +0000 (0:00:00.198) 0:00:06.537 ********* 2025-07-12 13:40:33.269593 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:33.269604 | orchestrator | 2025-07-12 13:40:33.269614 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:40:33.269624 | orchestrator | Saturday 12 July 2025 13:40:32 +0000 (0:00:00.200) 0:00:06.737 ********* 2025-07-12 13:40:33.269635 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:33.269668 | orchestrator | 2025-07-12 13:40:33.269681 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:40:33.269692 | orchestrator | Saturday 12 July 2025 13:40:32 +0000 (0:00:00.199) 0:00:06.936 ********* 2025-07-12 13:40:33.269702 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:33.269713 | orchestrator | 2025-07-12 13:40:33.269723 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:40:33.269734 | orchestrator | Saturday 12 July 2025 
13:40:32 +0000 (0:00:00.200) 0:00:07.136 ********* 2025-07-12 13:40:33.269745 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:33.269755 | orchestrator | 2025-07-12 13:40:33.269766 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:40:33.269776 | orchestrator | Saturday 12 July 2025 13:40:32 +0000 (0:00:00.196) 0:00:07.333 ********* 2025-07-12 13:40:33.269787 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:33.269809 | orchestrator | 2025-07-12 13:40:33.269819 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:40:33.269830 | orchestrator | Saturday 12 July 2025 13:40:32 +0000 (0:00:00.200) 0:00:07.534 ********* 2025-07-12 13:40:33.269841 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:33.269851 | orchestrator | 2025-07-12 13:40:33.269862 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:40:33.269872 | orchestrator | Saturday 12 July 2025 13:40:33 +0000 (0:00:00.208) 0:00:07.742 ********* 2025-07-12 13:40:33.269891 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:41.062487 | orchestrator | 2025-07-12 13:40:41.062707 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:40:41.062729 | orchestrator | Saturday 12 July 2025 13:40:33 +0000 (0:00:00.192) 0:00:07.935 ********* 2025-07-12 13:40:41.062742 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-07-12 13:40:41.062754 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-07-12 13:40:41.062765 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-07-12 13:40:41.062776 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-07-12 13:40:41.062787 | orchestrator | 2025-07-12 13:40:41.062801 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:40:41.062814 | 
orchestrator | Saturday 12 July 2025 13:40:34 +0000 (0:00:01.069) 0:00:09.004 ********* 2025-07-12 13:40:41.062828 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:41.062841 | orchestrator | 2025-07-12 13:40:41.062853 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:40:41.062866 | orchestrator | Saturday 12 July 2025 13:40:34 +0000 (0:00:00.203) 0:00:09.208 ********* 2025-07-12 13:40:41.062878 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:41.062890 | orchestrator | 2025-07-12 13:40:41.062903 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:40:41.062935 | orchestrator | Saturday 12 July 2025 13:40:34 +0000 (0:00:00.215) 0:00:09.423 ********* 2025-07-12 13:40:41.062947 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:41.062957 | orchestrator | 2025-07-12 13:40:41.062968 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:40:41.062979 | orchestrator | Saturday 12 July 2025 13:40:34 +0000 (0:00:00.203) 0:00:09.626 ********* 2025-07-12 13:40:41.062990 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:41.063001 | orchestrator | 2025-07-12 13:40:41.063011 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-07-12 13:40:41.063022 | orchestrator | Saturday 12 July 2025 13:40:35 +0000 (0:00:00.196) 0:00:09.823 ********* 2025-07-12 13:40:41.063033 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:41.063043 | orchestrator | 2025-07-12 13:40:41.063054 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-07-12 13:40:41.063065 | orchestrator | Saturday 12 July 2025 13:40:35 +0000 (0:00:00.142) 0:00:09.966 ********* 2025-07-12 13:40:41.063091 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'f86cb3d6-0e78-5b6a-8369-843476bf59dc'}}) 2025-07-12 13:40:41.063103 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a'}}) 2025-07-12 13:40:41.063114 | orchestrator | 2025-07-12 13:40:41.063124 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-07-12 13:40:41.063135 | orchestrator | Saturday 12 July 2025 13:40:35 +0000 (0:00:00.250) 0:00:10.217 ********* 2025-07-12 13:40:41.063147 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f86cb3d6-0e78-5b6a-8369-843476bf59dc', 'data_vg': 'ceph-f86cb3d6-0e78-5b6a-8369-843476bf59dc'}) 2025-07-12 13:40:41.063170 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a', 'data_vg': 'ceph-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a'}) 2025-07-12 13:40:41.063181 | orchestrator | 2025-07-12 13:40:41.063192 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-07-12 13:40:41.063223 | orchestrator | Saturday 12 July 2025 13:40:37 +0000 (0:00:02.001) 0:00:12.218 ********* 2025-07-12 13:40:41.063235 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f86cb3d6-0e78-5b6a-8369-843476bf59dc', 'data_vg': 'ceph-f86cb3d6-0e78-5b6a-8369-843476bf59dc'})  2025-07-12 13:40:41.063247 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a', 'data_vg': 'ceph-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a'})  2025-07-12 13:40:41.063257 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:41.063268 | orchestrator | 2025-07-12 13:40:41.063279 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-07-12 13:40:41.063289 | orchestrator | Saturday 12 July 2025 13:40:37 +0000 (0:00:00.148) 0:00:12.367 ********* 2025-07-12 13:40:41.063300 | orchestrator | changed: [testbed-node-3] => (item={'data': 
'osd-block-f86cb3d6-0e78-5b6a-8369-843476bf59dc', 'data_vg': 'ceph-f86cb3d6-0e78-5b6a-8369-843476bf59dc'}) 2025-07-12 13:40:41.063311 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a', 'data_vg': 'ceph-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a'}) 2025-07-12 13:40:41.063322 | orchestrator | 2025-07-12 13:40:41.063332 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-07-12 13:40:41.063343 | orchestrator | Saturday 12 July 2025 13:40:39 +0000 (0:00:01.463) 0:00:13.831 ********* 2025-07-12 13:40:41.063353 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f86cb3d6-0e78-5b6a-8369-843476bf59dc', 'data_vg': 'ceph-f86cb3d6-0e78-5b6a-8369-843476bf59dc'})  2025-07-12 13:40:41.063364 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a', 'data_vg': 'ceph-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a'})  2025-07-12 13:40:41.063375 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:41.063385 | orchestrator | 2025-07-12 13:40:41.063396 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-07-12 13:40:41.063407 | orchestrator | Saturday 12 July 2025 13:40:39 +0000 (0:00:00.149) 0:00:13.980 ********* 2025-07-12 13:40:41.063417 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:41.063428 | orchestrator | 2025-07-12 13:40:41.063438 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-07-12 13:40:41.063473 | orchestrator | Saturday 12 July 2025 13:40:39 +0000 (0:00:00.123) 0:00:14.103 ********* 2025-07-12 13:40:41.063485 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f86cb3d6-0e78-5b6a-8369-843476bf59dc', 'data_vg': 'ceph-f86cb3d6-0e78-5b6a-8369-843476bf59dc'})  2025-07-12 13:40:41.063497 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a', 'data_vg': 'ceph-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a'})  2025-07-12 13:40:41.063507 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:41.063518 | orchestrator | 2025-07-12 13:40:41.063529 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-07-12 13:40:41.063539 | orchestrator | Saturday 12 July 2025 13:40:39 +0000 (0:00:00.269) 0:00:14.373 ********* 2025-07-12 13:40:41.063550 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:41.063561 | orchestrator | 2025-07-12 13:40:41.063571 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-07-12 13:40:41.063582 | orchestrator | Saturday 12 July 2025 13:40:39 +0000 (0:00:00.135) 0:00:14.508 ********* 2025-07-12 13:40:41.063593 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f86cb3d6-0e78-5b6a-8369-843476bf59dc', 'data_vg': 'ceph-f86cb3d6-0e78-5b6a-8369-843476bf59dc'})  2025-07-12 13:40:41.063603 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a', 'data_vg': 'ceph-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a'})  2025-07-12 13:40:41.063614 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:41.063624 | orchestrator | 2025-07-12 13:40:41.063653 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-07-12 13:40:41.063672 | orchestrator | Saturday 12 July 2025 13:40:39 +0000 (0:00:00.138) 0:00:14.647 ********* 2025-07-12 13:40:41.063683 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:41.063693 | orchestrator | 2025-07-12 13:40:41.063704 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-07-12 13:40:41.063715 | orchestrator | Saturday 12 July 2025 13:40:40 +0000 (0:00:00.123) 0:00:14.770 ********* 2025-07-12 13:40:41.063725 | orchestrator | skipping: 
[testbed-node-3] => (item={'data': 'osd-block-f86cb3d6-0e78-5b6a-8369-843476bf59dc', 'data_vg': 'ceph-f86cb3d6-0e78-5b6a-8369-843476bf59dc'})  2025-07-12 13:40:41.063736 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a', 'data_vg': 'ceph-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a'})  2025-07-12 13:40:41.063747 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:41.063758 | orchestrator | 2025-07-12 13:40:41.063768 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-07-12 13:40:41.063779 | orchestrator | Saturday 12 July 2025 13:40:40 +0000 (0:00:00.139) 0:00:14.910 ********* 2025-07-12 13:40:41.063790 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:40:41.063801 | orchestrator | 2025-07-12 13:40:41.063811 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-07-12 13:40:41.063822 | orchestrator | Saturday 12 July 2025 13:40:40 +0000 (0:00:00.128) 0:00:15.038 ********* 2025-07-12 13:40:41.063833 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f86cb3d6-0e78-5b6a-8369-843476bf59dc', 'data_vg': 'ceph-f86cb3d6-0e78-5b6a-8369-843476bf59dc'})  2025-07-12 13:40:41.063844 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a', 'data_vg': 'ceph-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a'})  2025-07-12 13:40:41.063854 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:41.063865 | orchestrator | 2025-07-12 13:40:41.063876 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-07-12 13:40:41.063887 | orchestrator | Saturday 12 July 2025 13:40:40 +0000 (0:00:00.143) 0:00:15.182 ********* 2025-07-12 13:40:41.063897 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f86cb3d6-0e78-5b6a-8369-843476bf59dc', 'data_vg': 'ceph-f86cb3d6-0e78-5b6a-8369-843476bf59dc'})  
2025-07-12 13:40:41.063908 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a', 'data_vg': 'ceph-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a'})  2025-07-12 13:40:41.063919 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:41.063930 | orchestrator | 2025-07-12 13:40:41.063940 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-07-12 13:40:41.063951 | orchestrator | Saturday 12 July 2025 13:40:40 +0000 (0:00:00.142) 0:00:15.324 ********* 2025-07-12 13:40:41.063962 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f86cb3d6-0e78-5b6a-8369-843476bf59dc', 'data_vg': 'ceph-f86cb3d6-0e78-5b6a-8369-843476bf59dc'})  2025-07-12 13:40:41.063972 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a', 'data_vg': 'ceph-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a'})  2025-07-12 13:40:41.063983 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:41.063994 | orchestrator | 2025-07-12 13:40:41.064004 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-07-12 13:40:41.064015 | orchestrator | Saturday 12 July 2025 13:40:40 +0000 (0:00:00.148) 0:00:15.473 ********* 2025-07-12 13:40:41.064025 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:41.064036 | orchestrator | 2025-07-12 13:40:41.064046 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-07-12 13:40:41.064057 | orchestrator | Saturday 12 July 2025 13:40:40 +0000 (0:00:00.131) 0:00:15.604 ********* 2025-07-12 13:40:41.064068 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:41.064079 | orchestrator | 2025-07-12 13:40:41.064096 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-07-12 13:40:47.320267 | orchestrator | Saturday 12 July 2025 13:40:41 +0000 (0:00:00.126) 
0:00:15.731 ********* 2025-07-12 13:40:47.320417 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:47.320476 | orchestrator | 2025-07-12 13:40:47.320503 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-07-12 13:40:47.320525 | orchestrator | Saturday 12 July 2025 13:40:41 +0000 (0:00:00.126) 0:00:15.857 ********* 2025-07-12 13:40:47.320547 | orchestrator | ok: [testbed-node-3] => { 2025-07-12 13:40:47.320567 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-07-12 13:40:47.320580 | orchestrator | } 2025-07-12 13:40:47.320591 | orchestrator | 2025-07-12 13:40:47.320602 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-07-12 13:40:47.320642 | orchestrator | Saturday 12 July 2025 13:40:41 +0000 (0:00:00.251) 0:00:16.108 ********* 2025-07-12 13:40:47.320655 | orchestrator | ok: [testbed-node-3] => { 2025-07-12 13:40:47.320666 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-07-12 13:40:47.320677 | orchestrator | } 2025-07-12 13:40:47.320688 | orchestrator | 2025-07-12 13:40:47.320699 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-07-12 13:40:47.320716 | orchestrator | Saturday 12 July 2025 13:40:41 +0000 (0:00:00.138) 0:00:16.247 ********* 2025-07-12 13:40:47.320734 | orchestrator | ok: [testbed-node-3] => { 2025-07-12 13:40:47.320753 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-07-12 13:40:47.320772 | orchestrator | } 2025-07-12 13:40:47.320785 | orchestrator | 2025-07-12 13:40:47.320798 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-07-12 13:40:47.320810 | orchestrator | Saturday 12 July 2025 13:40:41 +0000 (0:00:00.162) 0:00:16.409 ********* 2025-07-12 13:40:47.320823 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:40:47.320835 | orchestrator | 2025-07-12 13:40:47.320847 | orchestrator | TASK [Gather WAL VGs 
with total and available size in bytes] ******************* 2025-07-12 13:40:47.320866 | orchestrator | Saturday 12 July 2025 13:40:42 +0000 (0:00:00.631) 0:00:17.041 ********* 2025-07-12 13:40:47.320886 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:40:47.320905 | orchestrator | 2025-07-12 13:40:47.320924 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-07-12 13:40:47.320936 | orchestrator | Saturday 12 July 2025 13:40:42 +0000 (0:00:00.525) 0:00:17.567 ********* 2025-07-12 13:40:47.320947 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:40:47.320958 | orchestrator | 2025-07-12 13:40:47.320968 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-07-12 13:40:47.320979 | orchestrator | Saturday 12 July 2025 13:40:43 +0000 (0:00:00.517) 0:00:18.085 ********* 2025-07-12 13:40:47.320990 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:40:47.321001 | orchestrator | 2025-07-12 13:40:47.321012 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-07-12 13:40:47.321023 | orchestrator | Saturday 12 July 2025 13:40:43 +0000 (0:00:00.161) 0:00:18.246 ********* 2025-07-12 13:40:47.321034 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:47.321044 | orchestrator | 2025-07-12 13:40:47.321055 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-07-12 13:40:47.321066 | orchestrator | Saturday 12 July 2025 13:40:43 +0000 (0:00:00.125) 0:00:18.372 ********* 2025-07-12 13:40:47.321077 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:47.321087 | orchestrator | 2025-07-12 13:40:47.321122 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-07-12 13:40:47.321143 | orchestrator | Saturday 12 July 2025 13:40:43 +0000 (0:00:00.117) 0:00:18.490 ********* 2025-07-12 13:40:47.321161 | orchestrator | ok: 
[testbed-node-3] => { 2025-07-12 13:40:47.321177 | orchestrator |  "vgs_report": { 2025-07-12 13:40:47.321195 | orchestrator |  "vg": [] 2025-07-12 13:40:47.321215 | orchestrator |  } 2025-07-12 13:40:47.321254 | orchestrator | } 2025-07-12 13:40:47.321268 | orchestrator | 2025-07-12 13:40:47.321279 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-07-12 13:40:47.321290 | orchestrator | Saturday 12 July 2025 13:40:43 +0000 (0:00:00.140) 0:00:18.630 ********* 2025-07-12 13:40:47.321323 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:47.321335 | orchestrator | 2025-07-12 13:40:47.321345 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-07-12 13:40:47.321356 | orchestrator | Saturday 12 July 2025 13:40:44 +0000 (0:00:00.145) 0:00:18.776 ********* 2025-07-12 13:40:47.321367 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:47.321377 | orchestrator | 2025-07-12 13:40:47.321388 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-07-12 13:40:47.321399 | orchestrator | Saturday 12 July 2025 13:40:44 +0000 (0:00:00.143) 0:00:18.920 ********* 2025-07-12 13:40:47.321410 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:47.321420 | orchestrator | 2025-07-12 13:40:47.321431 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-07-12 13:40:47.321442 | orchestrator | Saturday 12 July 2025 13:40:44 +0000 (0:00:00.338) 0:00:19.258 ********* 2025-07-12 13:40:47.321452 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:47.321463 | orchestrator | 2025-07-12 13:40:47.321473 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-07-12 13:40:47.321484 | orchestrator | Saturday 12 July 2025 13:40:44 +0000 (0:00:00.135) 0:00:19.393 ********* 2025-07-12 13:40:47.321494 | orchestrator | skipping: 
[testbed-node-3] 2025-07-12 13:40:47.321505 | orchestrator | 2025-07-12 13:40:47.321515 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-07-12 13:40:47.321526 | orchestrator | Saturday 12 July 2025 13:40:44 +0000 (0:00:00.149) 0:00:19.542 ********* 2025-07-12 13:40:47.321536 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:47.321547 | orchestrator | 2025-07-12 13:40:47.321557 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-07-12 13:40:47.321568 | orchestrator | Saturday 12 July 2025 13:40:45 +0000 (0:00:00.140) 0:00:19.683 ********* 2025-07-12 13:40:47.321578 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:47.321589 | orchestrator | 2025-07-12 13:40:47.321599 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-07-12 13:40:47.321631 | orchestrator | Saturday 12 July 2025 13:40:45 +0000 (0:00:00.159) 0:00:19.843 ********* 2025-07-12 13:40:47.321644 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:47.321655 | orchestrator | 2025-07-12 13:40:47.321677 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-07-12 13:40:47.321722 | orchestrator | Saturday 12 July 2025 13:40:45 +0000 (0:00:00.144) 0:00:19.988 ********* 2025-07-12 13:40:47.321742 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:47.321761 | orchestrator | 2025-07-12 13:40:47.321780 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-07-12 13:40:47.321796 | orchestrator | Saturday 12 July 2025 13:40:45 +0000 (0:00:00.139) 0:00:20.127 ********* 2025-07-12 13:40:47.321814 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:47.321833 | orchestrator | 2025-07-12 13:40:47.321851 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-07-12 13:40:47.321871 | 
orchestrator | Saturday 12 July 2025 13:40:45 +0000 (0:00:00.138) 0:00:20.266 ********* 2025-07-12 13:40:47.321889 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:47.321906 | orchestrator | 2025-07-12 13:40:47.321917 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-07-12 13:40:47.321928 | orchestrator | Saturday 12 July 2025 13:40:45 +0000 (0:00:00.135) 0:00:20.402 ********* 2025-07-12 13:40:47.321938 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:47.321949 | orchestrator | 2025-07-12 13:40:47.321959 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-07-12 13:40:47.321970 | orchestrator | Saturday 12 July 2025 13:40:45 +0000 (0:00:00.137) 0:00:20.539 ********* 2025-07-12 13:40:47.321980 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:47.321991 | orchestrator | 2025-07-12 13:40:47.322001 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-07-12 13:40:47.322012 | orchestrator | Saturday 12 July 2025 13:40:46 +0000 (0:00:00.146) 0:00:20.685 ********* 2025-07-12 13:40:47.322101 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:47.322113 | orchestrator | 2025-07-12 13:40:47.322123 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-07-12 13:40:47.322134 | orchestrator | Saturday 12 July 2025 13:40:46 +0000 (0:00:00.146) 0:00:20.831 ********* 2025-07-12 13:40:47.322146 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f86cb3d6-0e78-5b6a-8369-843476bf59dc', 'data_vg': 'ceph-f86cb3d6-0e78-5b6a-8369-843476bf59dc'})  2025-07-12 13:40:47.322158 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a', 'data_vg': 'ceph-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a'})  2025-07-12 13:40:47.322169 | orchestrator | skipping: [testbed-node-3] 2025-07-12 
13:40:47.322180 | orchestrator | 2025-07-12 13:40:47.322191 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-07-12 13:40:47.322201 | orchestrator | Saturday 12 July 2025 13:40:46 +0000 (0:00:00.161) 0:00:20.992 ********* 2025-07-12 13:40:47.322212 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f86cb3d6-0e78-5b6a-8369-843476bf59dc', 'data_vg': 'ceph-f86cb3d6-0e78-5b6a-8369-843476bf59dc'})  2025-07-12 13:40:47.322223 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a', 'data_vg': 'ceph-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a'})  2025-07-12 13:40:47.322234 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:47.322244 | orchestrator | 2025-07-12 13:40:47.322254 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-07-12 13:40:47.322265 | orchestrator | Saturday 12 July 2025 13:40:46 +0000 (0:00:00.353) 0:00:21.346 ********* 2025-07-12 13:40:47.322276 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f86cb3d6-0e78-5b6a-8369-843476bf59dc', 'data_vg': 'ceph-f86cb3d6-0e78-5b6a-8369-843476bf59dc'})  2025-07-12 13:40:47.322286 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a', 'data_vg': 'ceph-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a'})  2025-07-12 13:40:47.322297 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:47.322308 | orchestrator | 2025-07-12 13:40:47.322318 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-07-12 13:40:47.322329 | orchestrator | Saturday 12 July 2025 13:40:46 +0000 (0:00:00.157) 0:00:21.503 ********* 2025-07-12 13:40:47.322339 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f86cb3d6-0e78-5b6a-8369-843476bf59dc', 'data_vg': 'ceph-f86cb3d6-0e78-5b6a-8369-843476bf59dc'})  2025-07-12 
13:40:47.322350 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a', 'data_vg': 'ceph-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a'})  2025-07-12 13:40:47.322361 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:47.322371 | orchestrator | 2025-07-12 13:40:47.322381 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-07-12 13:40:47.322392 | orchestrator | Saturday 12 July 2025 13:40:46 +0000 (0:00:00.156) 0:00:21.660 ********* 2025-07-12 13:40:47.322402 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f86cb3d6-0e78-5b6a-8369-843476bf59dc', 'data_vg': 'ceph-f86cb3d6-0e78-5b6a-8369-843476bf59dc'})  2025-07-12 13:40:47.322413 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a', 'data_vg': 'ceph-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a'})  2025-07-12 13:40:47.322423 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:47.322434 | orchestrator | 2025-07-12 13:40:47.322444 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-07-12 13:40:47.322455 | orchestrator | Saturday 12 July 2025 13:40:47 +0000 (0:00:00.171) 0:00:21.832 ********* 2025-07-12 13:40:47.322471 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f86cb3d6-0e78-5b6a-8369-843476bf59dc', 'data_vg': 'ceph-f86cb3d6-0e78-5b6a-8369-843476bf59dc'})  2025-07-12 13:40:47.322552 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a', 'data_vg': 'ceph-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a'})  2025-07-12 13:40:52.613465 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:52.613575 | orchestrator | 2025-07-12 13:40:52.613590 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-07-12 13:40:52.613657 | orchestrator | Saturday 12 July 2025 
13:40:47 +0000 (0:00:00.154) 0:00:21.987 ********* 2025-07-12 13:40:52.613670 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f86cb3d6-0e78-5b6a-8369-843476bf59dc', 'data_vg': 'ceph-f86cb3d6-0e78-5b6a-8369-843476bf59dc'})  2025-07-12 13:40:52.613683 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a', 'data_vg': 'ceph-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a'})  2025-07-12 13:40:52.613693 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:52.613704 | orchestrator | 2025-07-12 13:40:52.613716 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-07-12 13:40:52.613727 | orchestrator | Saturday 12 July 2025 13:40:47 +0000 (0:00:00.164) 0:00:22.151 ********* 2025-07-12 13:40:52.613738 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f86cb3d6-0e78-5b6a-8369-843476bf59dc', 'data_vg': 'ceph-f86cb3d6-0e78-5b6a-8369-843476bf59dc'})  2025-07-12 13:40:52.613749 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a', 'data_vg': 'ceph-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a'})  2025-07-12 13:40:52.613760 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:52.613770 | orchestrator | 2025-07-12 13:40:52.613781 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-07-12 13:40:52.613792 | orchestrator | Saturday 12 July 2025 13:40:47 +0000 (0:00:00.162) 0:00:22.314 ********* 2025-07-12 13:40:52.613802 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:40:52.613814 | orchestrator | 2025-07-12 13:40:52.613825 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-07-12 13:40:52.613836 | orchestrator | Saturday 12 July 2025 13:40:48 +0000 (0:00:00.513) 0:00:22.828 ********* 2025-07-12 13:40:52.613846 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:40:52.613857 | 
orchestrator | 2025-07-12 13:40:52.613868 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-07-12 13:40:52.613878 | orchestrator | Saturday 12 July 2025 13:40:48 +0000 (0:00:00.503) 0:00:23.331 ********* 2025-07-12 13:40:52.613913 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:40:52.613924 | orchestrator | 2025-07-12 13:40:52.613935 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-07-12 13:40:52.613946 | orchestrator | Saturday 12 July 2025 13:40:48 +0000 (0:00:00.153) 0:00:23.484 ********* 2025-07-12 13:40:52.613957 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a', 'vg_name': 'ceph-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a'}) 2025-07-12 13:40:52.613969 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-f86cb3d6-0e78-5b6a-8369-843476bf59dc', 'vg_name': 'ceph-f86cb3d6-0e78-5b6a-8369-843476bf59dc'}) 2025-07-12 13:40:52.613980 | orchestrator | 2025-07-12 13:40:52.613992 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-07-12 13:40:52.614004 | orchestrator | Saturday 12 July 2025 13:40:48 +0000 (0:00:00.170) 0:00:23.655 ********* 2025-07-12 13:40:52.614075 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f86cb3d6-0e78-5b6a-8369-843476bf59dc', 'data_vg': 'ceph-f86cb3d6-0e78-5b6a-8369-843476bf59dc'})  2025-07-12 13:40:52.614091 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a', 'data_vg': 'ceph-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a'})  2025-07-12 13:40:52.614103 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:52.614115 | orchestrator | 2025-07-12 13:40:52.614127 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-07-12 13:40:52.614139 | orchestrator | Saturday 12 July 2025 13:40:49 +0000 
(0:00:00.150) 0:00:23.806 ********* 2025-07-12 13:40:52.614176 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f86cb3d6-0e78-5b6a-8369-843476bf59dc', 'data_vg': 'ceph-f86cb3d6-0e78-5b6a-8369-843476bf59dc'})  2025-07-12 13:40:52.614188 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a', 'data_vg': 'ceph-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a'})  2025-07-12 13:40:52.614200 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:52.614213 | orchestrator | 2025-07-12 13:40:52.614225 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-07-12 13:40:52.614237 | orchestrator | Saturday 12 July 2025 13:40:49 +0000 (0:00:00.360) 0:00:24.166 ********* 2025-07-12 13:40:52.614249 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f86cb3d6-0e78-5b6a-8369-843476bf59dc', 'data_vg': 'ceph-f86cb3d6-0e78-5b6a-8369-843476bf59dc'})  2025-07-12 13:40:52.614261 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a', 'data_vg': 'ceph-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a'})  2025-07-12 13:40:52.614273 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:52.614285 | orchestrator | 2025-07-12 13:40:52.614297 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-07-12 13:40:52.614309 | orchestrator | Saturday 12 July 2025 13:40:49 +0000 (0:00:00.164) 0:00:24.331 ********* 2025-07-12 13:40:52.614321 | orchestrator | ok: [testbed-node-3] => { 2025-07-12 13:40:52.614334 | orchestrator |  "lvm_report": { 2025-07-12 13:40:52.614345 | orchestrator |  "lv": [ 2025-07-12 13:40:52.614356 | orchestrator |  { 2025-07-12 13:40:52.614384 | orchestrator |  "lv_name": "osd-block-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a", 2025-07-12 13:40:52.614396 | orchestrator |  "vg_name": "ceph-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a" 2025-07-12 
13:40:52.614407 | orchestrator |  }, 2025-07-12 13:40:52.614417 | orchestrator |  { 2025-07-12 13:40:52.614428 | orchestrator |  "lv_name": "osd-block-f86cb3d6-0e78-5b6a-8369-843476bf59dc", 2025-07-12 13:40:52.614438 | orchestrator |  "vg_name": "ceph-f86cb3d6-0e78-5b6a-8369-843476bf59dc" 2025-07-12 13:40:52.614449 | orchestrator |  } 2025-07-12 13:40:52.614459 | orchestrator |  ], 2025-07-12 13:40:52.614470 | orchestrator |  "pv": [ 2025-07-12 13:40:52.614497 | orchestrator |  { 2025-07-12 13:40:52.614508 | orchestrator |  "pv_name": "/dev/sdb", 2025-07-12 13:40:52.614519 | orchestrator |  "vg_name": "ceph-f86cb3d6-0e78-5b6a-8369-843476bf59dc" 2025-07-12 13:40:52.614529 | orchestrator |  }, 2025-07-12 13:40:52.614540 | orchestrator |  { 2025-07-12 13:40:52.614550 | orchestrator |  "pv_name": "/dev/sdc", 2025-07-12 13:40:52.614561 | orchestrator |  "vg_name": "ceph-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a" 2025-07-12 13:40:52.614571 | orchestrator |  } 2025-07-12 13:40:52.614582 | orchestrator |  ] 2025-07-12 13:40:52.614592 | orchestrator |  } 2025-07-12 13:40:52.614624 | orchestrator | } 2025-07-12 13:40:52.614636 | orchestrator | 2025-07-12 13:40:52.614646 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-07-12 13:40:52.614657 | orchestrator | 2025-07-12 13:40:52.614668 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-07-12 13:40:52.614678 | orchestrator | Saturday 12 July 2025 13:40:49 +0000 (0:00:00.288) 0:00:24.620 ********* 2025-07-12 13:40:52.614689 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-07-12 13:40:52.614700 | orchestrator | 2025-07-12 13:40:52.614710 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-07-12 13:40:52.614721 | orchestrator | Saturday 12 July 2025 13:40:50 +0000 (0:00:00.245) 0:00:24.865 ********* 2025-07-12 13:40:52.614731 | orchestrator | ok: 
[testbed-node-4] 2025-07-12 13:40:52.614742 | orchestrator | 2025-07-12 13:40:52.614752 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:40:52.614771 | orchestrator | Saturday 12 July 2025 13:40:50 +0000 (0:00:00.235) 0:00:25.100 ********* 2025-07-12 13:40:52.614782 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-07-12 13:40:52.614793 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-07-12 13:40:52.614804 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-07-12 13:40:52.614814 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-07-12 13:40:52.614825 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-07-12 13:40:52.614835 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-07-12 13:40:52.614846 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-07-12 13:40:52.614856 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-07-12 13:40:52.614866 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-07-12 13:40:52.614877 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-07-12 13:40:52.614887 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-07-12 13:40:52.614898 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-07-12 13:40:52.614908 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-07-12 13:40:52.614919 | orchestrator | 2025-07-12 
13:40:52.614929 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:40:52.614939 | orchestrator | Saturday 12 July 2025 13:40:50 +0000 (0:00:00.410) 0:00:25.511 ********* 2025-07-12 13:40:52.614950 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:40:52.614960 | orchestrator | 2025-07-12 13:40:52.614971 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:40:52.614981 | orchestrator | Saturday 12 July 2025 13:40:51 +0000 (0:00:00.198) 0:00:25.710 ********* 2025-07-12 13:40:52.614991 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:40:52.615002 | orchestrator | 2025-07-12 13:40:52.615012 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:40:52.615023 | orchestrator | Saturday 12 July 2025 13:40:51 +0000 (0:00:00.189) 0:00:25.900 ********* 2025-07-12 13:40:52.615033 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:40:52.615043 | orchestrator | 2025-07-12 13:40:52.615054 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:40:52.615064 | orchestrator | Saturday 12 July 2025 13:40:51 +0000 (0:00:00.189) 0:00:26.090 ********* 2025-07-12 13:40:52.615075 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:40:52.615085 | orchestrator | 2025-07-12 13:40:52.615096 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:40:52.615107 | orchestrator | Saturday 12 July 2025 13:40:52 +0000 (0:00:00.610) 0:00:26.700 ********* 2025-07-12 13:40:52.615117 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:40:52.615127 | orchestrator | 2025-07-12 13:40:52.615138 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:40:52.615149 | orchestrator | Saturday 12 July 2025 13:40:52 +0000 (0:00:00.202) 
0:00:26.902 ********* 2025-07-12 13:40:52.615164 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:40:52.615175 | orchestrator | 2025-07-12 13:40:52.615185 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:40:52.615196 | orchestrator | Saturday 12 July 2025 13:40:52 +0000 (0:00:00.187) 0:00:27.090 ********* 2025-07-12 13:40:52.615207 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:40:52.615217 | orchestrator | 2025-07-12 13:40:52.615234 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:41:02.886403 | orchestrator | Saturday 12 July 2025 13:40:52 +0000 (0:00:00.190) 0:00:27.281 ********* 2025-07-12 13:41:02.886545 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:41:02.886562 | orchestrator | 2025-07-12 13:41:02.886619 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:41:02.886632 | orchestrator | Saturday 12 July 2025 13:40:52 +0000 (0:00:00.224) 0:00:27.505 ********* 2025-07-12 13:41:02.886643 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7818c38c-8f07-44e8-a255-faa9c2adb8b1) 2025-07-12 13:41:02.886655 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7818c38c-8f07-44e8-a255-faa9c2adb8b1) 2025-07-12 13:41:02.886666 | orchestrator | 2025-07-12 13:41:02.886678 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:41:02.886688 | orchestrator | Saturday 12 July 2025 13:40:53 +0000 (0:00:00.426) 0:00:27.932 ********* 2025-07-12 13:41:02.886699 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b303d5ed-b20f-4882-90f3-23adead236a1) 2025-07-12 13:41:02.886724 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b303d5ed-b20f-4882-90f3-23adead236a1) 2025-07-12 13:41:02.886734 | orchestrator | 2025-07-12 13:41:02.886745 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:41:02.886756 | orchestrator | Saturday 12 July 2025 13:40:53 +0000 (0:00:00.418) 0:00:28.350 ********* 2025-07-12 13:41:02.886777 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_751344b4-b2fa-492b-b080-e9e5b4c67369) 2025-07-12 13:41:02.886788 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_751344b4-b2fa-492b-b080-e9e5b4c67369) 2025-07-12 13:41:02.886799 | orchestrator | 2025-07-12 13:41:02.886810 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:41:02.886820 | orchestrator | Saturday 12 July 2025 13:40:54 +0000 (0:00:00.415) 0:00:28.766 ********* 2025-07-12 13:41:02.886831 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_11616408-d26b-4882-b347-f5b812b9aa41) 2025-07-12 13:41:02.886842 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_11616408-d26b-4882-b347-f5b812b9aa41) 2025-07-12 13:41:02.886853 | orchestrator | 2025-07-12 13:41:02.886863 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:41:02.886874 | orchestrator | Saturday 12 July 2025 13:40:54 +0000 (0:00:00.426) 0:00:29.193 ********* 2025-07-12 13:41:02.886885 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-07-12 13:41:02.886897 | orchestrator | 2025-07-12 13:41:02.886909 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:41:02.886922 | orchestrator | Saturday 12 July 2025 13:40:54 +0000 (0:00:00.351) 0:00:29.544 ********* 2025-07-12 13:41:02.886935 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-07-12 13:41:02.886948 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-07-12 
13:41:02.886960 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-07-12 13:41:02.886973 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-07-12 13:41:02.886985 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-07-12 13:41:02.886998 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-07-12 13:41:02.887011 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-07-12 13:41:02.887023 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-07-12 13:41:02.887035 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-07-12 13:41:02.887047 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-07-12 13:41:02.887060 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-07-12 13:41:02.887081 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-07-12 13:41:02.887094 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-07-12 13:41:02.887107 | orchestrator | 2025-07-12 13:41:02.887120 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:41:02.887132 | orchestrator | Saturday 12 July 2025 13:40:55 +0000 (0:00:00.621) 0:00:30.165 ********* 2025-07-12 13:41:02.887145 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:41:02.887158 | orchestrator | 2025-07-12 13:41:02.887170 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:41:02.887183 | orchestrator | Saturday 
12 July 2025 13:40:55 +0000 (0:00:00.205) 0:00:30.371 ********* 2025-07-12 13:41:02.887195 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:41:02.887208 | orchestrator | 2025-07-12 13:41:02.887220 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:41:02.887246 | orchestrator | Saturday 12 July 2025 13:40:55 +0000 (0:00:00.202) 0:00:30.574 ********* 2025-07-12 13:41:02.887258 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:41:02.887269 | orchestrator | 2025-07-12 13:41:02.887280 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:41:02.887291 | orchestrator | Saturday 12 July 2025 13:40:56 +0000 (0:00:00.201) 0:00:30.775 ********* 2025-07-12 13:41:02.887301 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:41:02.887312 | orchestrator | 2025-07-12 13:41:02.887340 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:41:02.887352 | orchestrator | Saturday 12 July 2025 13:40:56 +0000 (0:00:00.222) 0:00:30.998 ********* 2025-07-12 13:41:02.887362 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:41:02.887373 | orchestrator | 2025-07-12 13:41:02.887384 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:41:02.887395 | orchestrator | Saturday 12 July 2025 13:40:56 +0000 (0:00:00.181) 0:00:31.179 ********* 2025-07-12 13:41:02.887406 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:41:02.887416 | orchestrator | 2025-07-12 13:41:02.887427 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:41:02.887438 | orchestrator | Saturday 12 July 2025 13:40:56 +0000 (0:00:00.209) 0:00:31.389 ********* 2025-07-12 13:41:02.887448 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:41:02.887459 | orchestrator | 2025-07-12 13:41:02.887470 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:41:02.887480 | orchestrator | Saturday 12 July 2025 13:40:56 +0000 (0:00:00.211) 0:00:31.600 ********* 2025-07-12 13:41:02.887491 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:41:02.887502 | orchestrator | 2025-07-12 13:41:02.887512 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:41:02.887523 | orchestrator | Saturday 12 July 2025 13:40:57 +0000 (0:00:00.197) 0:00:31.797 ********* 2025-07-12 13:41:02.887534 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-07-12 13:41:02.887545 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-07-12 13:41:02.887555 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-07-12 13:41:02.887566 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-07-12 13:41:02.887599 | orchestrator | 2025-07-12 13:41:02.887610 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:41:02.887621 | orchestrator | Saturday 12 July 2025 13:40:57 +0000 (0:00:00.819) 0:00:32.617 ********* 2025-07-12 13:41:02.887632 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:41:02.887643 | orchestrator | 2025-07-12 13:41:02.887653 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:41:02.887664 | orchestrator | Saturday 12 July 2025 13:40:58 +0000 (0:00:00.198) 0:00:32.816 ********* 2025-07-12 13:41:02.887675 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:41:02.887686 | orchestrator | 2025-07-12 13:41:02.887704 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:41:02.887714 | orchestrator | Saturday 12 July 2025 13:40:58 +0000 (0:00:00.204) 0:00:33.021 ********* 2025-07-12 13:41:02.887725 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:41:02.887736 | 
orchestrator | 2025-07-12 13:41:02.887747 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:41:02.887757 | orchestrator | Saturday 12 July 2025 13:40:58 +0000 (0:00:00.647) 0:00:33.668 ********* 2025-07-12 13:41:02.887768 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:41:02.887778 | orchestrator | 2025-07-12 13:41:02.887789 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-07-12 13:41:02.887800 | orchestrator | Saturday 12 July 2025 13:40:59 +0000 (0:00:00.214) 0:00:33.883 ********* 2025-07-12 13:41:02.887811 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:41:02.887821 | orchestrator | 2025-07-12 13:41:02.887832 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-07-12 13:41:02.887843 | orchestrator | Saturday 12 July 2025 13:40:59 +0000 (0:00:00.159) 0:00:34.042 ********* 2025-07-12 13:41:02.887853 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8be3c046-75c4-5df6-b59b-0076bb3a4ccd'}}) 2025-07-12 13:41:02.887864 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f8ec8ce8-a083-5a5f-ae06-780cf5acbe42'}}) 2025-07-12 13:41:02.887875 | orchestrator | 2025-07-12 13:41:02.887886 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-07-12 13:41:02.887897 | orchestrator | Saturday 12 July 2025 13:40:59 +0000 (0:00:00.190) 0:00:34.233 ********* 2025-07-12 13:41:02.887909 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8be3c046-75c4-5df6-b59b-0076bb3a4ccd', 'data_vg': 'ceph-8be3c046-75c4-5df6-b59b-0076bb3a4ccd'}) 2025-07-12 13:41:02.887921 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42', 'data_vg': 'ceph-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42'}) 2025-07-12 13:41:02.887932 | 
orchestrator | 2025-07-12 13:41:02.887942 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-07-12 13:41:02.887953 | orchestrator | Saturday 12 July 2025 13:41:01 +0000 (0:00:01.813) 0:00:36.046 ********* 2025-07-12 13:41:02.887964 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8be3c046-75c4-5df6-b59b-0076bb3a4ccd', 'data_vg': 'ceph-8be3c046-75c4-5df6-b59b-0076bb3a4ccd'})  2025-07-12 13:41:02.887976 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42', 'data_vg': 'ceph-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42'})  2025-07-12 13:41:02.887986 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:41:02.887997 | orchestrator | 2025-07-12 13:41:02.888007 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-07-12 13:41:02.888018 | orchestrator | Saturday 12 July 2025 13:41:01 +0000 (0:00:00.173) 0:00:36.220 ********* 2025-07-12 13:41:02.888029 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8be3c046-75c4-5df6-b59b-0076bb3a4ccd', 'data_vg': 'ceph-8be3c046-75c4-5df6-b59b-0076bb3a4ccd'}) 2025-07-12 13:41:02.888040 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42', 'data_vg': 'ceph-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42'}) 2025-07-12 13:41:02.888050 | orchestrator | 2025-07-12 13:41:02.888068 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-07-12 13:41:08.383078 | orchestrator | Saturday 12 July 2025 13:41:02 +0000 (0:00:01.331) 0:00:37.551 ********* 2025-07-12 13:41:08.383195 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8be3c046-75c4-5df6-b59b-0076bb3a4ccd', 'data_vg': 'ceph-8be3c046-75c4-5df6-b59b-0076bb3a4ccd'})  2025-07-12 13:41:08.383213 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42', 'data_vg': 'ceph-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42'})  2025-07-12 13:41:08.383249 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:41:08.383263 | orchestrator | 2025-07-12 13:41:08.383275 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-07-12 13:41:08.383287 | orchestrator | Saturday 12 July 2025 13:41:03 +0000 (0:00:00.167) 0:00:37.719 ********* 2025-07-12 13:41:08.383298 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:41:08.383308 | orchestrator | 2025-07-12 13:41:08.383320 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-07-12 13:41:08.383332 | orchestrator | Saturday 12 July 2025 13:41:03 +0000 (0:00:00.129) 0:00:37.848 ********* 2025-07-12 13:41:08.383343 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8be3c046-75c4-5df6-b59b-0076bb3a4ccd', 'data_vg': 'ceph-8be3c046-75c4-5df6-b59b-0076bb3a4ccd'})  2025-07-12 13:41:08.383355 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42', 'data_vg': 'ceph-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42'})  2025-07-12 13:41:08.383366 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:41:08.383378 | orchestrator | 2025-07-12 13:41:08.383389 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-07-12 13:41:08.383401 | orchestrator | Saturday 12 July 2025 13:41:03 +0000 (0:00:00.153) 0:00:38.002 ********* 2025-07-12 13:41:08.383412 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:41:08.383423 | orchestrator | 2025-07-12 13:41:08.383435 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-07-12 13:41:08.383445 | orchestrator | Saturday 12 July 2025 13:41:03 +0000 (0:00:00.137) 0:00:38.140 ********* 2025-07-12 13:41:08.383457 | orchestrator | skipping: 
[testbed-node-4] => (item={'data': 'osd-block-8be3c046-75c4-5df6-b59b-0076bb3a4ccd', 'data_vg': 'ceph-8be3c046-75c4-5df6-b59b-0076bb3a4ccd'})  2025-07-12 13:41:08.383468 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42', 'data_vg': 'ceph-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42'})  2025-07-12 13:41:08.383480 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:41:08.383491 | orchestrator | 2025-07-12 13:41:08.383502 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-07-12 13:41:08.383513 | orchestrator | Saturday 12 July 2025 13:41:03 +0000 (0:00:00.145) 0:00:38.285 ********* 2025-07-12 13:41:08.383524 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:41:08.383536 | orchestrator | 2025-07-12 13:41:08.383547 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-07-12 13:41:08.383557 | orchestrator | Saturday 12 July 2025 13:41:03 +0000 (0:00:00.322) 0:00:38.608 ********* 2025-07-12 13:41:08.383622 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8be3c046-75c4-5df6-b59b-0076bb3a4ccd', 'data_vg': 'ceph-8be3c046-75c4-5df6-b59b-0076bb3a4ccd'})  2025-07-12 13:41:08.383634 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42', 'data_vg': 'ceph-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42'})  2025-07-12 13:41:08.383647 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:41:08.383659 | orchestrator | 2025-07-12 13:41:08.383671 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-07-12 13:41:08.383684 | orchestrator | Saturday 12 July 2025 13:41:04 +0000 (0:00:00.144) 0:00:38.752 ********* 2025-07-12 13:41:08.383697 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:41:08.383710 | orchestrator | 2025-07-12 13:41:08.383723 | orchestrator | TASK [Count OSDs put on ceph_db_devices 
defined in lvm_volumes] **************** 2025-07-12 13:41:08.383736 | orchestrator | Saturday 12 July 2025 13:41:04 +0000 (0:00:00.141) 0:00:38.894 ********* 2025-07-12 13:41:08.383748 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8be3c046-75c4-5df6-b59b-0076bb3a4ccd', 'data_vg': 'ceph-8be3c046-75c4-5df6-b59b-0076bb3a4ccd'})  2025-07-12 13:41:08.383761 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42', 'data_vg': 'ceph-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42'})  2025-07-12 13:41:08.383773 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:41:08.383794 | orchestrator | 2025-07-12 13:41:08.383806 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-07-12 13:41:08.383818 | orchestrator | Saturday 12 July 2025 13:41:04 +0000 (0:00:00.155) 0:00:39.049 ********* 2025-07-12 13:41:08.383831 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8be3c046-75c4-5df6-b59b-0076bb3a4ccd', 'data_vg': 'ceph-8be3c046-75c4-5df6-b59b-0076bb3a4ccd'})  2025-07-12 13:41:08.383851 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42', 'data_vg': 'ceph-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42'})  2025-07-12 13:41:08.383862 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:41:08.383873 | orchestrator | 2025-07-12 13:41:08.383883 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-07-12 13:41:08.383894 | orchestrator | Saturday 12 July 2025 13:41:04 +0000 (0:00:00.155) 0:00:39.204 ********* 2025-07-12 13:41:08.383923 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8be3c046-75c4-5df6-b59b-0076bb3a4ccd', 'data_vg': 'ceph-8be3c046-75c4-5df6-b59b-0076bb3a4ccd'})  2025-07-12 13:41:08.383935 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42', 
'data_vg': 'ceph-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42'})  2025-07-12 13:41:08.383946 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:41:08.383956 | orchestrator | 2025-07-12 13:41:08.383967 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-07-12 13:41:08.383977 | orchestrator | Saturday 12 July 2025 13:41:04 +0000 (0:00:00.155) 0:00:39.359 ********* 2025-07-12 13:41:08.383988 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:41:08.383998 | orchestrator | 2025-07-12 13:41:08.384009 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-07-12 13:41:08.384019 | orchestrator | Saturday 12 July 2025 13:41:04 +0000 (0:00:00.137) 0:00:39.497 ********* 2025-07-12 13:41:08.384030 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:41:08.384041 | orchestrator | 2025-07-12 13:41:08.384051 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-07-12 13:41:08.384062 | orchestrator | Saturday 12 July 2025 13:41:04 +0000 (0:00:00.137) 0:00:39.635 ********* 2025-07-12 13:41:08.384072 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:41:08.384083 | orchestrator | 2025-07-12 13:41:08.384093 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-07-12 13:41:08.384104 | orchestrator | Saturday 12 July 2025 13:41:05 +0000 (0:00:00.140) 0:00:39.775 ********* 2025-07-12 13:41:08.384115 | orchestrator | ok: [testbed-node-4] => { 2025-07-12 13:41:08.384125 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-07-12 13:41:08.384136 | orchestrator | } 2025-07-12 13:41:08.384147 | orchestrator | 2025-07-12 13:41:08.384157 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-07-12 13:41:08.384168 | orchestrator | Saturday 12 July 2025 13:41:05 +0000 (0:00:00.132) 0:00:39.907 ********* 2025-07-12 13:41:08.384179 | 
orchestrator | ok: [testbed-node-4] => {
2025-07-12 13:41:08.384189 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-07-12 13:41:08.384200 | orchestrator | }
2025-07-12 13:41:08.384210 | orchestrator |
2025-07-12 13:41:08.384221 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-07-12 13:41:08.384232 | orchestrator | Saturday 12 July 2025 13:41:05 +0000 (0:00:00.145) 0:00:40.053 *********
2025-07-12 13:41:08.384242 | orchestrator | ok: [testbed-node-4] => {
2025-07-12 13:41:08.384253 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-07-12 13:41:08.384263 | orchestrator | }
2025-07-12 13:41:08.384274 | orchestrator |
2025-07-12 13:41:08.384285 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-07-12 13:41:08.384295 | orchestrator | Saturday 12 July 2025 13:41:05 +0000 (0:00:00.141) 0:00:40.195 *********
2025-07-12 13:41:08.384306 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:41:08.384316 | orchestrator |
2025-07-12 13:41:08.384327 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-07-12 13:41:08.384345 | orchestrator | Saturday 12 July 2025 13:41:06 +0000 (0:00:00.732) 0:00:40.928 *********
2025-07-12 13:41:08.384356 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:41:08.384366 | orchestrator |
2025-07-12 13:41:08.384377 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-07-12 13:41:08.384388 | orchestrator | Saturday 12 July 2025 13:41:06 +0000 (0:00:00.505) 0:00:41.434 *********
2025-07-12 13:41:08.384398 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:41:08.384409 | orchestrator |
2025-07-12 13:41:08.384419 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-07-12 13:41:08.384430 | orchestrator | Saturday 12 July 2025 13:41:07 +0000 (0:00:00.522) 0:00:41.956 *********
2025-07-12 13:41:08.384440 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:41:08.384451 | orchestrator |
2025-07-12 13:41:08.384462 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-07-12 13:41:08.384472 | orchestrator | Saturday 12 July 2025 13:41:07 +0000 (0:00:00.146) 0:00:42.102 *********
2025-07-12 13:41:08.384483 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:08.384493 | orchestrator |
2025-07-12 13:41:08.384504 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-07-12 13:41:08.384515 | orchestrator | Saturday 12 July 2025 13:41:07 +0000 (0:00:00.115) 0:00:42.218 *********
2025-07-12 13:41:08.384525 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:08.384536 | orchestrator |
2025-07-12 13:41:08.384547 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-07-12 13:41:08.384557 | orchestrator | Saturday 12 July 2025 13:41:07 +0000 (0:00:00.107) 0:00:42.326 *********
2025-07-12 13:41:08.384613 | orchestrator | ok: [testbed-node-4] => {
2025-07-12 13:41:08.384633 | orchestrator |  "vgs_report": {
2025-07-12 13:41:08.384652 | orchestrator |  "vg": []
2025-07-12 13:41:08.384665 | orchestrator |  }
2025-07-12 13:41:08.384675 | orchestrator | }
2025-07-12 13:41:08.384686 | orchestrator |
2025-07-12 13:41:08.384697 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-07-12 13:41:08.384707 | orchestrator | Saturday 12 July 2025 13:41:07 +0000 (0:00:00.138) 0:00:42.465 *********
2025-07-12 13:41:08.384718 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:08.384728 | orchestrator |
2025-07-12 13:41:08.384739 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-07-12 13:41:08.384750 | orchestrator | Saturday 12 July 2025 13:41:07 +0000 (0:00:00.135) 0:00:42.600 *********
2025-07-12 13:41:08.384760 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:08.384771 | orchestrator |
2025-07-12 13:41:08.384781 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-07-12 13:41:08.384792 | orchestrator | Saturday 12 July 2025 13:41:08 +0000 (0:00:00.136) 0:00:42.737 *********
2025-07-12 13:41:08.384808 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:08.384819 | orchestrator |
2025-07-12 13:41:08.384830 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-07-12 13:41:08.384840 | orchestrator | Saturday 12 July 2025 13:41:08 +0000 (0:00:00.150) 0:00:42.887 *********
2025-07-12 13:41:08.384851 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:08.384861 | orchestrator |
2025-07-12 13:41:08.384872 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-07-12 13:41:08.384890 | orchestrator | Saturday 12 July 2025 13:41:08 +0000 (0:00:00.164) 0:00:43.051 *********
2025-07-12 13:41:13.151150 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:13.151263 | orchestrator |
2025-07-12 13:41:13.151280 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-07-12 13:41:13.151293 | orchestrator | Saturday 12 July 2025 13:41:08 +0000 (0:00:00.135) 0:00:43.187 *********
2025-07-12 13:41:13.151304 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:13.151315 | orchestrator |
2025-07-12 13:41:13.151327 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-07-12 13:41:13.151338 | orchestrator | Saturday 12 July 2025 13:41:08 +0000 (0:00:00.344) 0:00:43.531 *********
2025-07-12 13:41:13.151374 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:13.151386 | orchestrator |
2025-07-12 13:41:13.151397 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-07-12 13:41:13.151407 | orchestrator | Saturday 12 July 2025 13:41:08 +0000 (0:00:00.137) 0:00:43.669 *********
2025-07-12 13:41:13.151418 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:13.151429 | orchestrator |
2025-07-12 13:41:13.151440 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-07-12 13:41:13.151451 | orchestrator | Saturday 12 July 2025 13:41:09 +0000 (0:00:00.140) 0:00:43.809 *********
2025-07-12 13:41:13.151461 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:13.151472 | orchestrator |
2025-07-12 13:41:13.151483 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-07-12 13:41:13.151493 | orchestrator | Saturday 12 July 2025 13:41:09 +0000 (0:00:00.166) 0:00:43.976 *********
2025-07-12 13:41:13.151504 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:13.151515 | orchestrator |
2025-07-12 13:41:13.151526 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-07-12 13:41:13.151536 | orchestrator | Saturday 12 July 2025 13:41:09 +0000 (0:00:00.139) 0:00:44.115 *********
2025-07-12 13:41:13.151603 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:13.151616 | orchestrator |
2025-07-12 13:41:13.151627 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-07-12 13:41:13.151638 | orchestrator | Saturday 12 July 2025 13:41:09 +0000 (0:00:00.140) 0:00:44.255 *********
2025-07-12 13:41:13.151648 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:13.151659 | orchestrator |
2025-07-12 13:41:13.151670 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-07-12 13:41:13.151682 | orchestrator | Saturday 12 July 2025 13:41:09 +0000 (0:00:00.144) 0:00:44.400 *********
2025-07-12 13:41:13.151692 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:13.151703 | orchestrator |
2025-07-12 13:41:13.151714 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-07-12 13:41:13.151724 | orchestrator | Saturday 12 July 2025 13:41:09 +0000 (0:00:00.141) 0:00:44.541 *********
2025-07-12 13:41:13.151735 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:13.151746 | orchestrator |
2025-07-12 13:41:13.151756 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-07-12 13:41:13.151767 | orchestrator | Saturday 12 July 2025 13:41:10 +0000 (0:00:00.142) 0:00:44.683 *********
2025-07-12 13:41:13.151779 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8be3c046-75c4-5df6-b59b-0076bb3a4ccd', 'data_vg': 'ceph-8be3c046-75c4-5df6-b59b-0076bb3a4ccd'})
2025-07-12 13:41:13.151792 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42', 'data_vg': 'ceph-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42'})
2025-07-12 13:41:13.151803 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:13.151814 | orchestrator |
2025-07-12 13:41:13.151824 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-07-12 13:41:13.151835 | orchestrator | Saturday 12 July 2025 13:41:10 +0000 (0:00:00.151) 0:00:44.834 *********
2025-07-12 13:41:13.151846 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8be3c046-75c4-5df6-b59b-0076bb3a4ccd', 'data_vg': 'ceph-8be3c046-75c4-5df6-b59b-0076bb3a4ccd'})
2025-07-12 13:41:13.151857 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42', 'data_vg': 'ceph-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42'})
2025-07-12 13:41:13.151867 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:13.151878 | orchestrator |
2025-07-12 13:41:13.151888 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-07-12 13:41:13.151899 | orchestrator | Saturday 12 July 2025 13:41:10 +0000 (0:00:00.154) 0:00:44.985 *********
2025-07-12 13:41:13.151910 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8be3c046-75c4-5df6-b59b-0076bb3a4ccd', 'data_vg': 'ceph-8be3c046-75c4-5df6-b59b-0076bb3a4ccd'})
2025-07-12 13:41:13.151928 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42', 'data_vg': 'ceph-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42'})
2025-07-12 13:41:13.151939 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:13.151950 | orchestrator |
2025-07-12 13:41:13.151960 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-07-12 13:41:13.151971 | orchestrator | Saturday 12 July 2025 13:41:10 +0000 (0:00:00.154) 0:00:45.139 *********
2025-07-12 13:41:13.151982 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8be3c046-75c4-5df6-b59b-0076bb3a4ccd', 'data_vg': 'ceph-8be3c046-75c4-5df6-b59b-0076bb3a4ccd'})
2025-07-12 13:41:13.151993 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42', 'data_vg': 'ceph-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42'})
2025-07-12 13:41:13.152004 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:13.152014 | orchestrator |
2025-07-12 13:41:13.152025 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-07-12 13:41:13.152055 | orchestrator | Saturday 12 July 2025 13:41:10 +0000 (0:00:00.371) 0:00:45.510 *********
2025-07-12 13:41:13.152066 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8be3c046-75c4-5df6-b59b-0076bb3a4ccd', 'data_vg': 'ceph-8be3c046-75c4-5df6-b59b-0076bb3a4ccd'})
2025-07-12 13:41:13.152077 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42', 'data_vg': 'ceph-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42'})
2025-07-12 13:41:13.152088 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:13.152098 | orchestrator |
2025-07-12 13:41:13.152109 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-07-12 13:41:13.152120 | orchestrator | Saturday 12 July 2025 13:41:10 +0000 (0:00:00.160) 0:00:45.670 *********
2025-07-12 13:41:13.152130 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8be3c046-75c4-5df6-b59b-0076bb3a4ccd', 'data_vg': 'ceph-8be3c046-75c4-5df6-b59b-0076bb3a4ccd'})
2025-07-12 13:41:13.152141 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42', 'data_vg': 'ceph-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42'})
2025-07-12 13:41:13.152152 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:13.152162 | orchestrator |
2025-07-12 13:41:13.152173 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-07-12 13:41:13.152183 | orchestrator | Saturday 12 July 2025 13:41:11 +0000 (0:00:00.151) 0:00:45.822 *********
2025-07-12 13:41:13.152242 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8be3c046-75c4-5df6-b59b-0076bb3a4ccd', 'data_vg': 'ceph-8be3c046-75c4-5df6-b59b-0076bb3a4ccd'})
2025-07-12 13:41:13.152254 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42', 'data_vg': 'ceph-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42'})
2025-07-12 13:41:13.152265 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:13.152275 | orchestrator |
2025-07-12 13:41:13.152286 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-07-12 13:41:13.152296 | orchestrator | Saturday 12 July 2025 13:41:11 +0000 (0:00:00.159) 0:00:45.982 *********
2025-07-12 13:41:13.152307 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8be3c046-75c4-5df6-b59b-0076bb3a4ccd', 'data_vg': 'ceph-8be3c046-75c4-5df6-b59b-0076bb3a4ccd'})
2025-07-12 13:41:13.152318 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42', 'data_vg': 'ceph-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42'})
2025-07-12 13:41:13.152328 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:13.152339 | orchestrator |
2025-07-12 13:41:13.152349 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-07-12 13:41:13.152360 | orchestrator | Saturday 12 July 2025 13:41:11 +0000 (0:00:00.158) 0:00:46.141 *********
2025-07-12 13:41:13.152377 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:41:13.152388 | orchestrator |
2025-07-12 13:41:13.152398 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-07-12 13:41:13.152409 | orchestrator | Saturday 12 July 2025 13:41:11 +0000 (0:00:00.517) 0:00:46.659 *********
2025-07-12 13:41:13.152419 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:41:13.152430 | orchestrator |
2025-07-12 13:41:13.152440 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-07-12 13:41:13.152450 | orchestrator | Saturday 12 July 2025 13:41:12 +0000 (0:00:00.518) 0:00:47.177 *********
2025-07-12 13:41:13.152461 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:41:13.152471 | orchestrator |
2025-07-12 13:41:13.152482 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-07-12 13:41:13.152492 | orchestrator | Saturday 12 July 2025 13:41:12 +0000 (0:00:00.149) 0:00:47.326 *********
2025-07-12 13:41:13.152503 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-8be3c046-75c4-5df6-b59b-0076bb3a4ccd', 'vg_name': 'ceph-8be3c046-75c4-5df6-b59b-0076bb3a4ccd'})
2025-07-12 13:41:13.152514 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42', 'vg_name': 'ceph-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42'})
2025-07-12 13:41:13.152525 | orchestrator |
2025-07-12 13:41:13.152535 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-07-12 13:41:13.152545 | orchestrator | Saturday 12 July 2025 13:41:12 +0000 (0:00:00.170) 0:00:47.497 *********
2025-07-12 13:41:13.152576 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8be3c046-75c4-5df6-b59b-0076bb3a4ccd', 'data_vg': 'ceph-8be3c046-75c4-5df6-b59b-0076bb3a4ccd'})
2025-07-12 13:41:13.152587 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42', 'data_vg': 'ceph-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42'})
2025-07-12 13:41:13.152598 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:13.152608 | orchestrator |
2025-07-12 13:41:13.152619 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-07-12 13:41:13.152634 | orchestrator | Saturday 12 July 2025 13:41:12 +0000 (0:00:00.164) 0:00:47.662 *********
2025-07-12 13:41:13.152645 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8be3c046-75c4-5df6-b59b-0076bb3a4ccd', 'data_vg': 'ceph-8be3c046-75c4-5df6-b59b-0076bb3a4ccd'})
2025-07-12 13:41:13.152656 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42', 'data_vg': 'ceph-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42'})
2025-07-12 13:41:13.152674 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:19.351877 | orchestrator |
2025-07-12 13:41:19.352035 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-07-12 13:41:19.352056 | orchestrator | Saturday 12 July 2025 13:41:13 +0000 (0:00:00.156) 0:00:47.819 *********
2025-07-12 13:41:19.352069 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8be3c046-75c4-5df6-b59b-0076bb3a4ccd', 'data_vg': 'ceph-8be3c046-75c4-5df6-b59b-0076bb3a4ccd'})
2025-07-12 13:41:19.352083 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42', 'data_vg': 'ceph-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42'})
2025-07-12 13:41:19.352094 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:19.352105 | orchestrator |
2025-07-12 13:41:19.352117 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-07-12 13:41:19.352127 | orchestrator | Saturday 12 July 2025 13:41:13 +0000 (0:00:00.157) 0:00:47.976 *********
2025-07-12 13:41:19.352138 | orchestrator | ok: [testbed-node-4] => {
2025-07-12 13:41:19.352149 | orchestrator |  "lvm_report": {
2025-07-12 13:41:19.352161 | orchestrator |  "lv": [
2025-07-12 13:41:19.352172 | orchestrator |  {
2025-07-12 13:41:19.352182 | orchestrator |  "lv_name": "osd-block-8be3c046-75c4-5df6-b59b-0076bb3a4ccd",
2025-07-12 13:41:19.352194 | orchestrator |  "vg_name": "ceph-8be3c046-75c4-5df6-b59b-0076bb3a4ccd"
2025-07-12 13:41:19.352231 | orchestrator |  },
2025-07-12 13:41:19.352242 | orchestrator |  {
2025-07-12 13:41:19.352253 | orchestrator |  "lv_name": "osd-block-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42",
2025-07-12 13:41:19.352263 | orchestrator |  "vg_name": "ceph-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42"
2025-07-12 13:41:19.352274 | orchestrator |  }
2025-07-12 13:41:19.352284 | orchestrator |  ],
2025-07-12 13:41:19.352299 | orchestrator |  "pv": [
2025-07-12 13:41:19.352310 | orchestrator |  {
2025-07-12 13:41:19.352320 | orchestrator |  "pv_name": "/dev/sdb",
2025-07-12 13:41:19.352331 | orchestrator |  "vg_name": "ceph-8be3c046-75c4-5df6-b59b-0076bb3a4ccd"
2025-07-12 13:41:19.352342 | orchestrator |  },
2025-07-12 13:41:19.352352 | orchestrator |  {
2025-07-12 13:41:19.352363 | orchestrator |  "pv_name": "/dev/sdc",
2025-07-12 13:41:19.352373 | orchestrator |  "vg_name": "ceph-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42"
2025-07-12 13:41:19.352398 | orchestrator |  }
2025-07-12 13:41:19.352409 | orchestrator |  ]
2025-07-12 13:41:19.352430 | orchestrator |  }
2025-07-12 13:41:19.352441 | orchestrator | }
2025-07-12 13:41:19.352453 | orchestrator |
2025-07-12 13:41:19.352472 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-07-12 13:41:19.352488 | orchestrator |
2025-07-12 13:41:19.352499 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-07-12 13:41:19.352509 | orchestrator | Saturday 12 July 2025 13:41:13 +0000 (0:00:00.514) 0:00:48.491 *********
2025-07-12 13:41:19.352520 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-07-12 13:41:19.352531 | orchestrator |
2025-07-12 13:41:19.352631 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-07-12 13:41:19.352644 | orchestrator | Saturday 12 July 2025 13:41:14 +0000 (0:00:00.247) 0:00:48.738 *********
2025-07-12 13:41:19.352655 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:41:19.352665 | orchestrator |
2025-07-12 13:41:19.352676 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:41:19.352687 | orchestrator | Saturday 12 July 2025 13:41:14 +0000 (0:00:00.240) 0:00:48.978 *********
2025-07-12 13:41:19.352697 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-07-12 13:41:19.352708 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-07-12 13:41:19.352718 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-07-12 13:41:19.352729 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-07-12 13:41:19.352739 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-07-12 13:41:19.352750 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-07-12 13:41:19.352760 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-07-12 13:41:19.352771 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-07-12 13:41:19.352782 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-07-12 13:41:19.352792 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-07-12 13:41:19.352803 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-07-12 13:41:19.352813 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-07-12 13:41:19.352824 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-07-12 13:41:19.352859 | orchestrator |
2025-07-12 13:41:19.352869 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:41:19.352895 | orchestrator | Saturday 12 July 2025 13:41:14 +0000 (0:00:00.419) 0:00:49.398 *********
2025-07-12 13:41:19.352916 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:19.352927 | orchestrator |
2025-07-12 13:41:19.352937 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:41:19.352948 | orchestrator | Saturday 12 July 2025 13:41:14 +0000 (0:00:00.205) 0:00:49.603 *********
2025-07-12 13:41:19.352959 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:19.352969 | orchestrator |
2025-07-12 13:41:19.352980 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:41:19.353009 | orchestrator | Saturday 12 July 2025 13:41:15 +0000 (0:00:00.222) 0:00:49.826 *********
2025-07-12 13:41:19.353020 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:19.353031 | orchestrator |
2025-07-12 13:41:19.353042 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:41:19.353052 | orchestrator | Saturday 12 July 2025 13:41:15 +0000 (0:00:00.196) 0:00:50.023 *********
2025-07-12 13:41:19.353063 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:19.353073 | orchestrator |
2025-07-12 13:41:19.353084 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:41:19.353095 | orchestrator | Saturday 12 July 2025 13:41:15 +0000 (0:00:00.188) 0:00:50.211 *********
2025-07-12 13:41:19.353105 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:19.353159 | orchestrator |
2025-07-12 13:41:19.353171 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:41:19.353182 | orchestrator | Saturday 12 July 2025 13:41:15 +0000 (0:00:00.213) 0:00:50.425 *********
2025-07-12 13:41:19.353192 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:19.353203 | orchestrator |
2025-07-12 13:41:19.353213 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:41:19.353224 | orchestrator | Saturday 12 July 2025 13:41:16 +0000 (0:00:00.635) 0:00:51.061 *********
2025-07-12 13:41:19.353234 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:19.353245 | orchestrator |
2025-07-12 13:41:19.353255 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:41:19.353266 | orchestrator | Saturday 12 July 2025 13:41:16 +0000 (0:00:00.213) 0:00:51.275 *********
2025-07-12 13:41:19.353276 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:19.353287 | orchestrator |
2025-07-12 13:41:19.353297 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:41:19.353308 | orchestrator | Saturday 12 July 2025 13:41:16 +0000 (0:00:00.204) 0:00:51.479 *********
2025-07-12 13:41:19.353318 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968)
2025-07-12 13:41:19.353330 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968)
2025-07-12 13:41:19.353341 | orchestrator |
2025-07-12 13:41:19.353351 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:41:19.353388 | orchestrator | Saturday 12 July 2025 13:41:17 +0000 (0:00:00.435) 0:00:51.915 *********
2025-07-12 13:41:19.353400 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_375d9971-3091-4ee7-ad22-0f2ee4316c51)
2025-07-12 13:41:19.353446 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_375d9971-3091-4ee7-ad22-0f2ee4316c51)
2025-07-12 13:41:19.353459 | orchestrator |
2025-07-12 13:41:19.353470 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:41:19.353480 | orchestrator | Saturday 12 July 2025 13:41:17 +0000 (0:00:00.474) 0:00:52.390 *********
2025-07-12 13:41:19.353491 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_80678cc2-85df-4096-9cf9-3a4ced065123)
2025-07-12 13:41:19.353502 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_80678cc2-85df-4096-9cf9-3a4ced065123)
2025-07-12 13:41:19.353512 | orchestrator |
2025-07-12 13:41:19.353523 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:41:19.353585 | orchestrator | Saturday 12 July 2025 13:41:18 +0000 (0:00:00.434) 0:00:52.824 *********
2025-07-12 13:41:19.353607 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1e2296ca-3498-48cf-a25a-293306b54174)
2025-07-12 13:41:19.353618 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1e2296ca-3498-48cf-a25a-293306b54174)
2025-07-12 13:41:19.353628 | orchestrator |
2025-07-12 13:41:19.353639 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:41:19.353650 | orchestrator | Saturday 12 July 2025 13:41:18 +0000 (0:00:00.424) 0:00:53.248 *********
2025-07-12 13:41:19.353660 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-07-12 13:41:19.353670 | orchestrator |
2025-07-12 13:41:19.353681 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:41:19.353691 | orchestrator | Saturday 12 July 2025 13:41:18 +0000 (0:00:00.351) 0:00:53.600 *********
2025-07-12 13:41:19.353702 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-07-12 13:41:19.353712 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-07-12 13:41:19.353723 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-07-12 13:41:19.353733 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-07-12 13:41:19.353743 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-07-12 13:41:19.353754 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-07-12 13:41:19.353764 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-07-12 13:41:19.353780 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-07-12 13:41:19.353791 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-07-12 13:41:19.353801 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-07-12 13:41:19.353812 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-07-12 13:41:19.353830 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-07-12 13:41:28.174810 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-07-12 13:41:28.174929 | orchestrator |
2025-07-12 13:41:28.174946 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:41:28.174959 | orchestrator | Saturday 12 July 2025 13:41:19 +0000 (0:00:00.414) 0:00:54.015 *********
2025-07-12 13:41:28.174970 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:28.174982 | orchestrator |
2025-07-12 13:41:28.174994 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:41:28.175005 | orchestrator | Saturday 12 July 2025 13:41:19 +0000 (0:00:00.215) 0:00:54.230 *********
2025-07-12 13:41:28.175016 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:28.175027 | orchestrator |
2025-07-12 13:41:28.175038 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:41:28.175049 | orchestrator | Saturday 12 July 2025 13:41:19 +0000 (0:00:00.206) 0:00:54.437 *********
2025-07-12 13:41:28.175060 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:28.175071 | orchestrator |
2025-07-12 13:41:28.175082 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:41:28.175092 | orchestrator | Saturday 12 July 2025 13:41:20 +0000 (0:00:00.590) 0:00:55.028 *********
2025-07-12 13:41:28.175103 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:28.175114 | orchestrator |
2025-07-12 13:41:28.175124 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:41:28.175135 | orchestrator | Saturday 12 July 2025 13:41:20 +0000 (0:00:00.209) 0:00:55.238 *********
2025-07-12 13:41:28.175166 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:28.175178 | orchestrator |
2025-07-12 13:41:28.175188 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:41:28.175199 | orchestrator | Saturday 12 July 2025 13:41:20 +0000 (0:00:00.208) 0:00:55.446 *********
2025-07-12 13:41:28.175210 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:28.175220 | orchestrator |
2025-07-12 13:41:28.175231 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:41:28.175242 | orchestrator | Saturday 12 July 2025 13:41:20 +0000 (0:00:00.197) 0:00:55.643 *********
2025-07-12 13:41:28.175252 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:28.175263 | orchestrator |
2025-07-12 13:41:28.175273 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:41:28.175287 | orchestrator | Saturday 12 July 2025 13:41:21 +0000 (0:00:00.195) 0:00:55.839 *********
2025-07-12 13:41:28.175299 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:28.175311 | orchestrator |
2025-07-12 13:41:28.175323 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:41:28.175335 | orchestrator | Saturday 12 July 2025 13:41:21 +0000 (0:00:00.203) 0:00:56.042 *********
2025-07-12 13:41:28.175348 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-07-12 13:41:28.175361 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-07-12 13:41:28.175373 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-07-12 13:41:28.175386 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-07-12 13:41:28.175398 | orchestrator |
2025-07-12 13:41:28.175410 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:41:28.175423 | orchestrator | Saturday 12 July 2025 13:41:22 +0000 (0:00:00.654) 0:00:56.697 *********
2025-07-12 13:41:28.175435 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:28.175447 | orchestrator |
2025-07-12 13:41:28.175459 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:41:28.175472 | orchestrator | Saturday 12 July 2025 13:41:22 +0000 (0:00:00.188) 0:00:56.885 *********
2025-07-12 13:41:28.175484 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:28.175497 | orchestrator |
2025-07-12 13:41:28.175509 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:41:28.175555 | orchestrator | Saturday 12 July 2025 13:41:22 +0000 (0:00:00.203) 0:00:57.089 *********
2025-07-12 13:41:28.175568 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:28.175580 | orchestrator |
2025-07-12 13:41:28.175591 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:41:28.175604 | orchestrator | Saturday 12 July 2025 13:41:22 +0000 (0:00:00.187) 0:00:57.277 *********
2025-07-12 13:41:28.175616 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:28.175628 | orchestrator |
2025-07-12 13:41:28.175639 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-07-12 13:41:28.175650 | orchestrator | Saturday 12 July 2025 13:41:22 +0000 (0:00:00.345) 0:00:57.472 *********
2025-07-12 13:41:28.175661 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:28.175671 | orchestrator |
2025-07-12 13:41:28.175682 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-07-12 13:41:28.175693 | orchestrator | Saturday 12 July 2025 13:41:23 +0000 (0:00:00.345) 0:00:57.817 *********
2025-07-12 13:41:28.175703 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '76cf46ce-80cb-5d18-8384-c0838affc5b6'}})
2025-07-12 13:41:28.175715 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '465622e3-903d-5505-a41f-76599f0f3897'}})
2025-07-12 13:41:28.175725 | orchestrator |
2025-07-12 13:41:28.175736 | orchestrator | TASK [Create block VGs] ********************************************************
2025-07-12 13:41:28.175747 | orchestrator | Saturday 12 July 2025 13:41:23 +0000 (0:00:00.198) 0:00:58.015 *********
2025-07-12 13:41:28.175759 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-76cf46ce-80cb-5d18-8384-c0838affc5b6', 'data_vg': 'ceph-76cf46ce-80cb-5d18-8384-c0838affc5b6'})
2025-07-12 13:41:28.175779 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-465622e3-903d-5505-a41f-76599f0f3897', 'data_vg': 'ceph-465622e3-903d-5505-a41f-76599f0f3897'})
2025-07-12 13:41:28.175789 | orchestrator |
2025-07-12 13:41:28.175800 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-07-12 13:41:28.175846 | orchestrator | Saturday 12 July 2025 13:41:25 +0000 (0:00:01.842) 0:00:59.858 *********
2025-07-12 13:41:28.175859 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-76cf46ce-80cb-5d18-8384-c0838affc5b6', 'data_vg': 'ceph-76cf46ce-80cb-5d18-8384-c0838affc5b6'})
2025-07-12 13:41:28.175871 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-465622e3-903d-5505-a41f-76599f0f3897', 'data_vg': 'ceph-465622e3-903d-5505-a41f-76599f0f3897'})
2025-07-12 13:41:28.175881 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:28.175892 | orchestrator |
2025-07-12 13:41:28.175902 | orchestrator | TASK [Create block LVs] ********************************************************
2025-07-12 13:41:28.175913 | orchestrator | Saturday 12 July 2025 13:41:25 +0000 (0:00:00.163) 0:01:00.022 *********
2025-07-12 13:41:28.175923 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-76cf46ce-80cb-5d18-8384-c0838affc5b6', 'data_vg': 'ceph-76cf46ce-80cb-5d18-8384-c0838affc5b6'})
2025-07-12 13:41:28.175934 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-465622e3-903d-5505-a41f-76599f0f3897', 'data_vg': 'ceph-465622e3-903d-5505-a41f-76599f0f3897'})
2025-07-12 13:41:28.175944 | orchestrator |
2025-07-12 13:41:28.175955 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-07-12 13:41:28.175965 | orchestrator | Saturday 12 July 2025 13:41:26 +0000 (0:00:01.294) 0:01:01.316 *********
2025-07-12 13:41:28.175976 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-76cf46ce-80cb-5d18-8384-c0838affc5b6', 'data_vg': 'ceph-76cf46ce-80cb-5d18-8384-c0838affc5b6'})
2025-07-12 13:41:28.175986 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-465622e3-903d-5505-a41f-76599f0f3897', 'data_vg': 'ceph-465622e3-903d-5505-a41f-76599f0f3897'})
2025-07-12 13:41:28.175997 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:28.176007 | orchestrator |
2025-07-12 13:41:28.176017 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-07-12 13:41:28.176028 | orchestrator | Saturday 12 July 2025 13:41:26 +0000 (0:00:00.154) 0:01:01.471 *********
2025-07-12 13:41:28.176038 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:28.176049 | orchestrator |
2025-07-12 13:41:28.176059 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-07-12 13:41:28.176070 | orchestrator | Saturday 12 July 2025 13:41:26 +0000 (0:00:00.137) 0:01:01.608 *********
2025-07-12 13:41:28.176080 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-76cf46ce-80cb-5d18-8384-c0838affc5b6', 'data_vg': 'ceph-76cf46ce-80cb-5d18-8384-c0838affc5b6'})
2025-07-12 13:41:28.176091 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-465622e3-903d-5505-a41f-76599f0f3897', 'data_vg': 'ceph-465622e3-903d-5505-a41f-76599f0f3897'})
2025-07-12 13:41:28.176101 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:28.176112 | orchestrator |
2025-07-12 13:41:28.176122 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-07-12 13:41:28.176133 | orchestrator | Saturday 12 July 2025 13:41:27 +0000 (0:00:00.156) 0:01:01.764 *********
2025-07-12 13:41:28.176143 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:28.176153 | orchestrator |
2025-07-12 13:41:28.176164 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-07-12 13:41:28.176174 | orchestrator | Saturday 12 July 2025 13:41:27 +0000 (0:00:00.141) 0:01:01.906 *********
2025-07-12 13:41:28.176185 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-76cf46ce-80cb-5d18-8384-c0838affc5b6', 'data_vg': 'ceph-76cf46ce-80cb-5d18-8384-c0838affc5b6'})
2025-07-12 13:41:28.176196 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-465622e3-903d-5505-a41f-76599f0f3897', 'data_vg': 'ceph-465622e3-903d-5505-a41f-76599f0f3897'})
2025-07-12 13:41:28.176213 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:28.176224 | orchestrator |
2025-07-12 13:41:28.176234 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-07-12 13:41:28.176245 | orchestrator | Saturday 12 July 2025 13:41:27 +0000 (0:00:00.153) 0:01:02.059 *********
2025-07-12 13:41:28.176255 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:28.176265 | orchestrator |
2025-07-12 13:41:28.176276 | orchestrator | TASK
[Print 'Create DB+WAL VGs'] *********************************************** 2025-07-12 13:41:28.176286 | orchestrator | Saturday 12 July 2025 13:41:27 +0000 (0:00:00.133) 0:01:02.193 ********* 2025-07-12 13:41:28.176297 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-76cf46ce-80cb-5d18-8384-c0838affc5b6', 'data_vg': 'ceph-76cf46ce-80cb-5d18-8384-c0838affc5b6'})  2025-07-12 13:41:28.176307 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-465622e3-903d-5505-a41f-76599f0f3897', 'data_vg': 'ceph-465622e3-903d-5505-a41f-76599f0f3897'})  2025-07-12 13:41:28.176318 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:28.176328 | orchestrator | 2025-07-12 13:41:28.176339 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-07-12 13:41:28.176354 | orchestrator | Saturday 12 July 2025 13:41:27 +0000 (0:00:00.155) 0:01:02.348 ********* 2025-07-12 13:41:28.176365 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:41:28.176375 | orchestrator | 2025-07-12 13:41:28.176386 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-07-12 13:41:28.176398 | orchestrator | Saturday 12 July 2025 13:41:27 +0000 (0:00:00.142) 0:01:02.491 ********* 2025-07-12 13:41:28.176427 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-76cf46ce-80cb-5d18-8384-c0838affc5b6', 'data_vg': 'ceph-76cf46ce-80cb-5d18-8384-c0838affc5b6'})  2025-07-12 13:41:34.250142 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-465622e3-903d-5505-a41f-76599f0f3897', 'data_vg': 'ceph-465622e3-903d-5505-a41f-76599f0f3897'})  2025-07-12 13:41:34.250253 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:34.250269 | orchestrator | 2025-07-12 13:41:34.250282 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-07-12 13:41:34.250294 | orchestrator | Saturday 12 July 2025 
13:41:28 +0000 (0:00:00.353) 0:01:02.844 ********* 2025-07-12 13:41:34.250305 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-76cf46ce-80cb-5d18-8384-c0838affc5b6', 'data_vg': 'ceph-76cf46ce-80cb-5d18-8384-c0838affc5b6'})  2025-07-12 13:41:34.250317 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-465622e3-903d-5505-a41f-76599f0f3897', 'data_vg': 'ceph-465622e3-903d-5505-a41f-76599f0f3897'})  2025-07-12 13:41:34.250327 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:34.250338 | orchestrator | 2025-07-12 13:41:34.250349 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-07-12 13:41:34.250360 | orchestrator | Saturday 12 July 2025 13:41:28 +0000 (0:00:00.159) 0:01:03.004 ********* 2025-07-12 13:41:34.250370 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-76cf46ce-80cb-5d18-8384-c0838affc5b6', 'data_vg': 'ceph-76cf46ce-80cb-5d18-8384-c0838affc5b6'})  2025-07-12 13:41:34.250381 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-465622e3-903d-5505-a41f-76599f0f3897', 'data_vg': 'ceph-465622e3-903d-5505-a41f-76599f0f3897'})  2025-07-12 13:41:34.250392 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:34.250403 | orchestrator | 2025-07-12 13:41:34.250414 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-07-12 13:41:34.250424 | orchestrator | Saturday 12 July 2025 13:41:28 +0000 (0:00:00.147) 0:01:03.151 ********* 2025-07-12 13:41:34.250435 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:34.250446 | orchestrator | 2025-07-12 13:41:34.250457 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-07-12 13:41:34.250489 | orchestrator | Saturday 12 July 2025 13:41:28 +0000 (0:00:00.140) 0:01:03.292 ********* 2025-07-12 13:41:34.250561 | orchestrator | skipping: [testbed-node-5] 2025-07-12 
13:41:34.250583 | orchestrator | 2025-07-12 13:41:34.250601 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-07-12 13:41:34.250618 | orchestrator | Saturday 12 July 2025 13:41:28 +0000 (0:00:00.134) 0:01:03.427 ********* 2025-07-12 13:41:34.250629 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:34.250639 | orchestrator | 2025-07-12 13:41:34.250650 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-07-12 13:41:34.250661 | orchestrator | Saturday 12 July 2025 13:41:28 +0000 (0:00:00.135) 0:01:03.563 ********* 2025-07-12 13:41:34.250671 | orchestrator | ok: [testbed-node-5] => { 2025-07-12 13:41:34.250683 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-07-12 13:41:34.250694 | orchestrator | } 2025-07-12 13:41:34.250704 | orchestrator | 2025-07-12 13:41:34.250715 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-07-12 13:41:34.250725 | orchestrator | Saturday 12 July 2025 13:41:29 +0000 (0:00:00.174) 0:01:03.737 ********* 2025-07-12 13:41:34.250736 | orchestrator | ok: [testbed-node-5] => { 2025-07-12 13:41:34.250746 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-07-12 13:41:34.250756 | orchestrator | } 2025-07-12 13:41:34.250767 | orchestrator | 2025-07-12 13:41:34.250777 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-07-12 13:41:34.250788 | orchestrator | Saturday 12 July 2025 13:41:29 +0000 (0:00:00.147) 0:01:03.885 ********* 2025-07-12 13:41:34.250798 | orchestrator | ok: [testbed-node-5] => { 2025-07-12 13:41:34.250808 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-07-12 13:41:34.250819 | orchestrator | } 2025-07-12 13:41:34.250829 | orchestrator | 2025-07-12 13:41:34.250840 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-07-12 13:41:34.250850 | 
orchestrator | Saturday 12 July 2025 13:41:29 +0000 (0:00:00.148) 0:01:04.033 ********* 2025-07-12 13:41:34.250861 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:41:34.250871 | orchestrator | 2025-07-12 13:41:34.250882 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-07-12 13:41:34.250893 | orchestrator | Saturday 12 July 2025 13:41:29 +0000 (0:00:00.504) 0:01:04.537 ********* 2025-07-12 13:41:34.250904 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:41:34.250914 | orchestrator | 2025-07-12 13:41:34.250925 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-07-12 13:41:34.250935 | orchestrator | Saturday 12 July 2025 13:41:30 +0000 (0:00:00.503) 0:01:05.041 ********* 2025-07-12 13:41:34.250946 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:41:34.250956 | orchestrator | 2025-07-12 13:41:34.250966 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-07-12 13:41:34.250977 | orchestrator | Saturday 12 July 2025 13:41:30 +0000 (0:00:00.523) 0:01:05.565 ********* 2025-07-12 13:41:34.250987 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:41:34.250998 | orchestrator | 2025-07-12 13:41:34.251008 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-07-12 13:41:34.251018 | orchestrator | Saturday 12 July 2025 13:41:31 +0000 (0:00:00.364) 0:01:05.929 ********* 2025-07-12 13:41:34.251029 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:34.251039 | orchestrator | 2025-07-12 13:41:34.251065 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-07-12 13:41:34.251076 | orchestrator | Saturday 12 July 2025 13:41:31 +0000 (0:00:00.130) 0:01:06.060 ********* 2025-07-12 13:41:34.251086 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:34.251097 | orchestrator | 2025-07-12 13:41:34.251108 | 
orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-07-12 13:41:34.251118 | orchestrator | Saturday 12 July 2025 13:41:31 +0000 (0:00:00.111) 0:01:06.171 ********* 2025-07-12 13:41:34.251129 | orchestrator | ok: [testbed-node-5] => { 2025-07-12 13:41:34.251139 | orchestrator |  "vgs_report": { 2025-07-12 13:41:34.251159 | orchestrator |  "vg": [] 2025-07-12 13:41:34.251187 | orchestrator |  } 2025-07-12 13:41:34.251198 | orchestrator | } 2025-07-12 13:41:34.251209 | orchestrator | 2025-07-12 13:41:34.251219 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-07-12 13:41:34.251230 | orchestrator | Saturday 12 July 2025 13:41:31 +0000 (0:00:00.162) 0:01:06.333 ********* 2025-07-12 13:41:34.251240 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:34.251251 | orchestrator | 2025-07-12 13:41:34.251261 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-07-12 13:41:34.251272 | orchestrator | Saturday 12 July 2025 13:41:31 +0000 (0:00:00.137) 0:01:06.471 ********* 2025-07-12 13:41:34.251283 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:34.251293 | orchestrator | 2025-07-12 13:41:34.251304 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-07-12 13:41:34.251315 | orchestrator | Saturday 12 July 2025 13:41:31 +0000 (0:00:00.145) 0:01:06.616 ********* 2025-07-12 13:41:34.251325 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:34.251335 | orchestrator | 2025-07-12 13:41:34.251346 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-07-12 13:41:34.251356 | orchestrator | Saturday 12 July 2025 13:41:32 +0000 (0:00:00.133) 0:01:06.750 ********* 2025-07-12 13:41:34.251367 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:34.251377 | orchestrator | 2025-07-12 13:41:34.251388 | 
orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-07-12 13:41:34.251398 | orchestrator | Saturday 12 July 2025 13:41:32 +0000 (0:00:00.131) 0:01:06.882 ********* 2025-07-12 13:41:34.251409 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:34.251419 | orchestrator | 2025-07-12 13:41:34.251430 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-07-12 13:41:34.251440 | orchestrator | Saturday 12 July 2025 13:41:32 +0000 (0:00:00.132) 0:01:07.014 ********* 2025-07-12 13:41:34.251451 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:34.251461 | orchestrator | 2025-07-12 13:41:34.251471 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-07-12 13:41:34.251482 | orchestrator | Saturday 12 July 2025 13:41:32 +0000 (0:00:00.132) 0:01:07.147 ********* 2025-07-12 13:41:34.251492 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:34.251532 | orchestrator | 2025-07-12 13:41:34.251543 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-07-12 13:41:34.251554 | orchestrator | Saturday 12 July 2025 13:41:32 +0000 (0:00:00.129) 0:01:07.276 ********* 2025-07-12 13:41:34.251564 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:34.251574 | orchestrator | 2025-07-12 13:41:34.251585 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-07-12 13:41:34.251596 | orchestrator | Saturday 12 July 2025 13:41:32 +0000 (0:00:00.137) 0:01:07.414 ********* 2025-07-12 13:41:34.251606 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:34.251617 | orchestrator | 2025-07-12 13:41:34.251627 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-07-12 13:41:34.251638 | orchestrator | Saturday 12 July 2025 13:41:33 +0000 (0:00:00.331) 0:01:07.745 ********* 
2025-07-12 13:41:34.251648 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:34.251659 | orchestrator | 2025-07-12 13:41:34.251669 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-07-12 13:41:34.251680 | orchestrator | Saturday 12 July 2025 13:41:33 +0000 (0:00:00.149) 0:01:07.895 ********* 2025-07-12 13:41:34.251690 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:34.251701 | orchestrator | 2025-07-12 13:41:34.251711 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-07-12 13:41:34.251722 | orchestrator | Saturday 12 July 2025 13:41:33 +0000 (0:00:00.117) 0:01:08.013 ********* 2025-07-12 13:41:34.251732 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:34.251743 | orchestrator | 2025-07-12 13:41:34.251753 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-07-12 13:41:34.251764 | orchestrator | Saturday 12 July 2025 13:41:33 +0000 (0:00:00.147) 0:01:08.160 ********* 2025-07-12 13:41:34.251782 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:34.251792 | orchestrator | 2025-07-12 13:41:34.251803 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-07-12 13:41:34.251814 | orchestrator | Saturday 12 July 2025 13:41:33 +0000 (0:00:00.146) 0:01:08.307 ********* 2025-07-12 13:41:34.251824 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:34.251835 | orchestrator | 2025-07-12 13:41:34.251845 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-07-12 13:41:34.251856 | orchestrator | Saturday 12 July 2025 13:41:33 +0000 (0:00:00.143) 0:01:08.450 ********* 2025-07-12 13:41:34.251867 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-76cf46ce-80cb-5d18-8384-c0838affc5b6', 'data_vg': 'ceph-76cf46ce-80cb-5d18-8384-c0838affc5b6'})  2025-07-12 
13:41:34.251878 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-465622e3-903d-5505-a41f-76599f0f3897', 'data_vg': 'ceph-465622e3-903d-5505-a41f-76599f0f3897'})  2025-07-12 13:41:34.251888 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:34.251899 | orchestrator | 2025-07-12 13:41:34.251910 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-07-12 13:41:34.251920 | orchestrator | Saturday 12 July 2025 13:41:33 +0000 (0:00:00.153) 0:01:08.604 ********* 2025-07-12 13:41:34.251936 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-76cf46ce-80cb-5d18-8384-c0838affc5b6', 'data_vg': 'ceph-76cf46ce-80cb-5d18-8384-c0838affc5b6'})  2025-07-12 13:41:34.251947 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-465622e3-903d-5505-a41f-76599f0f3897', 'data_vg': 'ceph-465622e3-903d-5505-a41f-76599f0f3897'})  2025-07-12 13:41:34.251958 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:34.251968 | orchestrator | 2025-07-12 13:41:34.251979 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-07-12 13:41:34.251990 | orchestrator | Saturday 12 July 2025 13:41:34 +0000 (0:00:00.159) 0:01:08.763 ********* 2025-07-12 13:41:34.252008 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-76cf46ce-80cb-5d18-8384-c0838affc5b6', 'data_vg': 'ceph-76cf46ce-80cb-5d18-8384-c0838affc5b6'})  2025-07-12 13:41:37.229019 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-465622e3-903d-5505-a41f-76599f0f3897', 'data_vg': 'ceph-465622e3-903d-5505-a41f-76599f0f3897'})  2025-07-12 13:41:37.229148 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:37.229166 | orchestrator | 2025-07-12 13:41:37.229180 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-07-12 13:41:37.229901 | orchestrator | Saturday 12 July 2025 
13:41:34 +0000 (0:00:00.155) 0:01:08.919 ********* 2025-07-12 13:41:37.229921 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-76cf46ce-80cb-5d18-8384-c0838affc5b6', 'data_vg': 'ceph-76cf46ce-80cb-5d18-8384-c0838affc5b6'})  2025-07-12 13:41:37.229934 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-465622e3-903d-5505-a41f-76599f0f3897', 'data_vg': 'ceph-465622e3-903d-5505-a41f-76599f0f3897'})  2025-07-12 13:41:37.229945 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:37.229956 | orchestrator | 2025-07-12 13:41:37.229967 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-07-12 13:41:37.229977 | orchestrator | Saturday 12 July 2025 13:41:34 +0000 (0:00:00.151) 0:01:09.071 ********* 2025-07-12 13:41:37.229988 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-76cf46ce-80cb-5d18-8384-c0838affc5b6', 'data_vg': 'ceph-76cf46ce-80cb-5d18-8384-c0838affc5b6'})  2025-07-12 13:41:37.229999 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-465622e3-903d-5505-a41f-76599f0f3897', 'data_vg': 'ceph-465622e3-903d-5505-a41f-76599f0f3897'})  2025-07-12 13:41:37.230009 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:37.230057 | orchestrator | 2025-07-12 13:41:37.230068 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-07-12 13:41:37.230102 | orchestrator | Saturday 12 July 2025 13:41:34 +0000 (0:00:00.148) 0:01:09.220 ********* 2025-07-12 13:41:37.230113 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-76cf46ce-80cb-5d18-8384-c0838affc5b6', 'data_vg': 'ceph-76cf46ce-80cb-5d18-8384-c0838affc5b6'})  2025-07-12 13:41:37.230124 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-465622e3-903d-5505-a41f-76599f0f3897', 'data_vg': 'ceph-465622e3-903d-5505-a41f-76599f0f3897'})  2025-07-12 13:41:37.230135 | orchestrator | 
skipping: [testbed-node-5] 2025-07-12 13:41:37.230146 | orchestrator | 2025-07-12 13:41:37.230156 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-07-12 13:41:37.230167 | orchestrator | Saturday 12 July 2025 13:41:34 +0000 (0:00:00.154) 0:01:09.374 ********* 2025-07-12 13:41:37.230177 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-76cf46ce-80cb-5d18-8384-c0838affc5b6', 'data_vg': 'ceph-76cf46ce-80cb-5d18-8384-c0838affc5b6'})  2025-07-12 13:41:37.230188 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-465622e3-903d-5505-a41f-76599f0f3897', 'data_vg': 'ceph-465622e3-903d-5505-a41f-76599f0f3897'})  2025-07-12 13:41:37.230199 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:37.230209 | orchestrator | 2025-07-12 13:41:37.230220 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-07-12 13:41:37.230230 | orchestrator | Saturday 12 July 2025 13:41:35 +0000 (0:00:00.357) 0:01:09.732 ********* 2025-07-12 13:41:37.230240 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-76cf46ce-80cb-5d18-8384-c0838affc5b6', 'data_vg': 'ceph-76cf46ce-80cb-5d18-8384-c0838affc5b6'})  2025-07-12 13:41:37.230251 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-465622e3-903d-5505-a41f-76599f0f3897', 'data_vg': 'ceph-465622e3-903d-5505-a41f-76599f0f3897'})  2025-07-12 13:41:37.230262 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:37.230272 | orchestrator | 2025-07-12 13:41:37.230283 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-07-12 13:41:37.230293 | orchestrator | Saturday 12 July 2025 13:41:35 +0000 (0:00:00.163) 0:01:09.896 ********* 2025-07-12 13:41:37.230304 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:41:37.230315 | orchestrator | 2025-07-12 13:41:37.230326 | orchestrator | TASK [Get list of Ceph PVs with 
associated VGs] ******************************** 2025-07-12 13:41:37.230336 | orchestrator | Saturday 12 July 2025 13:41:35 +0000 (0:00:00.509) 0:01:10.405 ********* 2025-07-12 13:41:37.230347 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:41:37.230357 | orchestrator | 2025-07-12 13:41:37.230367 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-07-12 13:41:37.230378 | orchestrator | Saturday 12 July 2025 13:41:36 +0000 (0:00:00.509) 0:01:10.915 ********* 2025-07-12 13:41:37.230388 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:41:37.230398 | orchestrator | 2025-07-12 13:41:37.230409 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-07-12 13:41:37.230420 | orchestrator | Saturday 12 July 2025 13:41:36 +0000 (0:00:00.146) 0:01:11.062 ********* 2025-07-12 13:41:37.230430 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-465622e3-903d-5505-a41f-76599f0f3897', 'vg_name': 'ceph-465622e3-903d-5505-a41f-76599f0f3897'}) 2025-07-12 13:41:37.230442 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-76cf46ce-80cb-5d18-8384-c0838affc5b6', 'vg_name': 'ceph-76cf46ce-80cb-5d18-8384-c0838affc5b6'}) 2025-07-12 13:41:37.230452 | orchestrator | 2025-07-12 13:41:37.230463 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-07-12 13:41:37.230473 | orchestrator | Saturday 12 July 2025 13:41:36 +0000 (0:00:00.174) 0:01:11.236 ********* 2025-07-12 13:41:37.230522 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-76cf46ce-80cb-5d18-8384-c0838affc5b6', 'data_vg': 'ceph-76cf46ce-80cb-5d18-8384-c0838affc5b6'})  2025-07-12 13:41:37.230534 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-465622e3-903d-5505-a41f-76599f0f3897', 'data_vg': 'ceph-465622e3-903d-5505-a41f-76599f0f3897'})  2025-07-12 13:41:37.230553 | orchestrator | skipping: 
[testbed-node-5] 2025-07-12 13:41:37.230564 | orchestrator | 2025-07-12 13:41:37.230574 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-07-12 13:41:37.230585 | orchestrator | Saturday 12 July 2025 13:41:36 +0000 (0:00:00.147) 0:01:11.384 ********* 2025-07-12 13:41:37.230616 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-76cf46ce-80cb-5d18-8384-c0838affc5b6', 'data_vg': 'ceph-76cf46ce-80cb-5d18-8384-c0838affc5b6'})  2025-07-12 13:41:37.230627 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-465622e3-903d-5505-a41f-76599f0f3897', 'data_vg': 'ceph-465622e3-903d-5505-a41f-76599f0f3897'})  2025-07-12 13:41:37.230638 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:37.230648 | orchestrator | 2025-07-12 13:41:37.230659 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-07-12 13:41:37.230669 | orchestrator | Saturday 12 July 2025 13:41:36 +0000 (0:00:00.155) 0:01:11.540 ********* 2025-07-12 13:41:37.230680 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-76cf46ce-80cb-5d18-8384-c0838affc5b6', 'data_vg': 'ceph-76cf46ce-80cb-5d18-8384-c0838affc5b6'})  2025-07-12 13:41:37.230691 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-465622e3-903d-5505-a41f-76599f0f3897', 'data_vg': 'ceph-465622e3-903d-5505-a41f-76599f0f3897'})  2025-07-12 13:41:37.230702 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:37.230712 | orchestrator | 2025-07-12 13:41:37.230722 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-07-12 13:41:37.230733 | orchestrator | Saturday 12 July 2025 13:41:37 +0000 (0:00:00.170) 0:01:11.710 ********* 2025-07-12 13:41:37.230743 | orchestrator | ok: [testbed-node-5] => { 2025-07-12 13:41:37.230754 | orchestrator |  "lvm_report": { 2025-07-12 13:41:37.230765 | orchestrator |  "lv": [ 2025-07-12 
13:41:37.230776 | orchestrator |  { 2025-07-12 13:41:37.230787 | orchestrator |  "lv_name": "osd-block-465622e3-903d-5505-a41f-76599f0f3897", 2025-07-12 13:41:37.230798 | orchestrator |  "vg_name": "ceph-465622e3-903d-5505-a41f-76599f0f3897" 2025-07-12 13:41:37.230808 | orchestrator |  }, 2025-07-12 13:41:37.230818 | orchestrator |  { 2025-07-12 13:41:37.230829 | orchestrator |  "lv_name": "osd-block-76cf46ce-80cb-5d18-8384-c0838affc5b6", 2025-07-12 13:41:37.230839 | orchestrator |  "vg_name": "ceph-76cf46ce-80cb-5d18-8384-c0838affc5b6" 2025-07-12 13:41:37.230849 | orchestrator |  } 2025-07-12 13:41:37.230860 | orchestrator |  ], 2025-07-12 13:41:37.230870 | orchestrator |  "pv": [ 2025-07-12 13:41:37.230881 | orchestrator |  { 2025-07-12 13:41:37.230891 | orchestrator |  "pv_name": "/dev/sdb", 2025-07-12 13:41:37.230901 | orchestrator |  "vg_name": "ceph-76cf46ce-80cb-5d18-8384-c0838affc5b6" 2025-07-12 13:41:37.230912 | orchestrator |  }, 2025-07-12 13:41:37.230922 | orchestrator |  { 2025-07-12 13:41:37.230932 | orchestrator |  "pv_name": "/dev/sdc", 2025-07-12 13:41:37.230943 | orchestrator |  "vg_name": "ceph-465622e3-903d-5505-a41f-76599f0f3897" 2025-07-12 13:41:37.230953 | orchestrator |  } 2025-07-12 13:41:37.230964 | orchestrator |  ] 2025-07-12 13:41:37.230974 | orchestrator |  } 2025-07-12 13:41:37.230984 | orchestrator | } 2025-07-12 13:41:37.230995 | orchestrator | 2025-07-12 13:41:37.231006 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:41:37.231016 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-07-12 13:41:37.231027 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-07-12 13:41:37.231037 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-07-12 13:41:37.231055 | orchestrator | 2025-07-12 13:41:37.231065 | 
orchestrator | 2025-07-12 13:41:37.231076 | orchestrator | 2025-07-12 13:41:37.231086 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:41:37.231097 | orchestrator | Saturday 12 July 2025 13:41:37 +0000 (0:00:00.166) 0:01:11.876 ********* 2025-07-12 13:41:37.231107 | orchestrator | =============================================================================== 2025-07-12 13:41:37.231118 | orchestrator | Create block VGs -------------------------------------------------------- 5.66s 2025-07-12 13:41:37.231134 | orchestrator | Create block LVs -------------------------------------------------------- 4.09s 2025-07-12 13:41:37.231144 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.87s 2025-07-12 13:41:37.231155 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.56s 2025-07-12 13:41:37.231165 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.54s 2025-07-12 13:41:37.231176 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.54s 2025-07-12 13:41:37.231186 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.53s 2025-07-12 13:41:37.231197 | orchestrator | Add known partitions to the list of available block devices ------------- 1.48s 2025-07-12 13:41:37.231214 | orchestrator | Add known links to the list of available block devices ------------------ 1.24s 2025-07-12 13:41:37.632262 | orchestrator | Add known partitions to the list of available block devices ------------- 1.07s 2025-07-12 13:41:37.632385 | orchestrator | Print LVM report data --------------------------------------------------- 0.97s 2025-07-12 13:41:37.632409 | orchestrator | Add known partitions to the list of available block devices ------------- 0.82s 2025-07-12 13:41:37.632428 | orchestrator | Get extra vars for Ceph configuration 
----------------------------------- 0.78s 2025-07-12 13:41:37.632454 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s 2025-07-12 13:41:37.632480 | orchestrator | Get initial list of available block devices ----------------------------- 0.71s 2025-07-12 13:41:37.632578 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.68s 2025-07-12 13:41:37.632597 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.68s 2025-07-12 13:41:37.632613 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.67s 2025-07-12 13:41:37.632631 | orchestrator | Combine JSON from _db/wal/db_wal_vgs_cmd_output ------------------------- 0.67s 2025-07-12 13:41:37.632649 | orchestrator | Print 'Create DB LVs for ceph_db_devices' ------------------------------- 0.66s 2025-07-12 13:41:49.955102 | orchestrator | 2025-07-12 13:41:49 | INFO  | Task 366a2a91-d94a-4758-8d49-29a5549f5f20 (facts) was prepared for execution. 2025-07-12 13:41:49.955229 | orchestrator | 2025-07-12 13:41:49 | INFO  | It takes a moment until task 366a2a91-d94a-4758-8d49-29a5549f5f20 (facts) has been started and output is visible here. 
2025-07-12 13:42:03.103557 | orchestrator |
2025-07-12 13:42:03.103676 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-07-12 13:42:03.103691 | orchestrator |
2025-07-12 13:42:03.103704 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-07-12 13:42:03.103717 | orchestrator | Saturday 12 July 2025 13:41:53 +0000 (0:00:00.278) 0:00:00.278 *********
2025-07-12 13:42:03.103728 | orchestrator | ok: [testbed-manager]
2025-07-12 13:42:03.103741 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:42:03.103752 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:42:03.103763 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:42:03.103774 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:42:03.103785 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:42:03.103796 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:42:03.103807 | orchestrator |
2025-07-12 13:42:03.103818 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-07-12 13:42:03.103829 | orchestrator | Saturday 12 July 2025 13:41:55 +0000 (0:00:01.090) 0:00:01.369 *********
2025-07-12 13:42:03.103868 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:42:03.103881 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:42:03.103891 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:42:03.103902 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:42:03.103913 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:42:03.103923 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:42:03.103934 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:42:03.103945 | orchestrator |
2025-07-12 13:42:03.103956 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-07-12 13:42:03.103967 | orchestrator |
2025-07-12 13:42:03.103978 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-07-12 13:42:03.103988 | orchestrator | Saturday 12 July 2025 13:41:56 +0000 (0:00:01.262) 0:00:02.631 *********
2025-07-12 13:42:03.104000 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:42:03.104011 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:42:03.104022 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:42:03.104032 | orchestrator | ok: [testbed-manager]
2025-07-12 13:42:03.104043 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:42:03.104054 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:42:03.104066 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:42:03.104079 | orchestrator |
2025-07-12 13:42:03.104093 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-07-12 13:42:03.104105 | orchestrator |
2025-07-12 13:42:03.104118 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-07-12 13:42:03.104130 | orchestrator | Saturday 12 July 2025 13:42:02 +0000 (0:00:05.772) 0:00:08.404 *********
2025-07-12 13:42:03.104143 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:42:03.104155 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:42:03.104168 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:42:03.104181 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:42:03.104193 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:42:03.104206 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:42:03.104218 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:42:03.104230 | orchestrator |
2025-07-12 13:42:03.104242 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:42:03.104256 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 13:42:03.104269 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 13:42:03.104297 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 13:42:03.104310 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 13:42:03.104323 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 13:42:03.104335 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 13:42:03.104347 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 13:42:03.104358 | orchestrator |
2025-07-12 13:42:03.104369 | orchestrator |
2025-07-12 13:42:03.104380 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:42:03.104391 | orchestrator | Saturday 12 July 2025 13:42:02 +0000 (0:00:00.611) 0:00:09.016 *********
2025-07-12 13:42:03.104402 | orchestrator | ===============================================================================
2025-07-12 13:42:03.104413 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.77s
2025-07-12 13:42:03.104457 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.26s
2025-07-12 13:42:03.104470 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.09s
2025-07-12 13:42:03.104481 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.61s
2025-07-12 13:42:03.396890 | orchestrator |
2025-07-12 13:42:03.400368 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat Jul 12 13:42:03 UTC 2025
2025-07-12 13:42:03.400399 | orchestrator |
2025-07-12 13:42:05.143987 | orchestrator | 2025-07-12 13:42:05 | INFO  | Collection nutshell is prepared for execution
2025-07-12 13:42:05.144086 | orchestrator | 2025-07-12 13:42:05 |
INFO  | D [0] - dotfiles
2025-07-12 13:42:15.234837 | orchestrator | 2025-07-12 13:42:15 | INFO  | D [0] - homer
2025-07-12 13:42:15.234999 | orchestrator | 2025-07-12 13:42:15 | INFO  | D [0] - netdata
2025-07-12 13:42:15.235018 | orchestrator | 2025-07-12 13:42:15 | INFO  | D [0] - openstackclient
2025-07-12 13:42:15.235030 | orchestrator | 2025-07-12 13:42:15 | INFO  | D [0] - phpmyadmin
2025-07-12 13:42:15.235041 | orchestrator | 2025-07-12 13:42:15 | INFO  | A [0] - common
2025-07-12 13:42:15.237362 | orchestrator | 2025-07-12 13:42:15 | INFO  | A [1] -- loadbalancer
2025-07-12 13:42:15.237396 | orchestrator | 2025-07-12 13:42:15 | INFO  | D [2] --- opensearch
2025-07-12 13:42:15.237576 | orchestrator | 2025-07-12 13:42:15 | INFO  | A [2] --- mariadb-ng
2025-07-12 13:42:15.237716 | orchestrator | 2025-07-12 13:42:15 | INFO  | D [3] ---- horizon
2025-07-12 13:42:15.238079 | orchestrator | 2025-07-12 13:42:15 | INFO  | A [3] ---- keystone
2025-07-12 13:42:15.238107 | orchestrator | 2025-07-12 13:42:15 | INFO  | A [4] ----- neutron
2025-07-12 13:42:15.238519 | orchestrator | 2025-07-12 13:42:15 | INFO  | D [5] ------ wait-for-nova
2025-07-12 13:42:15.238717 | orchestrator | 2025-07-12 13:42:15 | INFO  | A [5] ------ octavia
2025-07-12 13:42:15.240298 | orchestrator | 2025-07-12 13:42:15 | INFO  | D [4] ----- barbican
2025-07-12 13:42:15.240324 | orchestrator | 2025-07-12 13:42:15 | INFO  | D [4] ----- designate
2025-07-12 13:42:15.240336 | orchestrator | 2025-07-12 13:42:15 | INFO  | D [4] ----- ironic
2025-07-12 13:42:15.240532 | orchestrator | 2025-07-12 13:42:15 | INFO  | D [4] ----- placement
2025-07-12 13:42:15.240555 | orchestrator | 2025-07-12 13:42:15 | INFO  | D [4] ----- magnum
2025-07-12 13:42:15.240977 | orchestrator | 2025-07-12 13:42:15 | INFO  | A [1] -- openvswitch
2025-07-12 13:42:15.241596 | orchestrator | 2025-07-12 13:42:15 | INFO  | D [2] --- ovn
2025-07-12 13:42:15.241690 | orchestrator | 2025-07-12 13:42:15 | INFO  | D [1] -- memcached
2025-07-12 13:42:15.241715 | orchestrator | 2025-07-12 13:42:15 | INFO  | D [1] -- redis
2025-07-12 13:42:15.241832 | orchestrator | 2025-07-12 13:42:15 | INFO  | D [1] -- rabbitmq-ng
2025-07-12 13:42:15.241855 | orchestrator | 2025-07-12 13:42:15 | INFO  | A [0] - kubernetes
2025-07-12 13:42:15.244863 | orchestrator | 2025-07-12 13:42:15 | INFO  | D [1] -- kubeconfig
2025-07-12 13:42:15.244918 | orchestrator | 2025-07-12 13:42:15 | INFO  | A [1] -- copy-kubeconfig
2025-07-12 13:42:15.244933 | orchestrator | 2025-07-12 13:42:15 | INFO  | A [0] - ceph
2025-07-12 13:42:15.246960 | orchestrator | 2025-07-12 13:42:15 | INFO  | A [1] -- ceph-pools
2025-07-12 13:42:15.246987 | orchestrator | 2025-07-12 13:42:15 | INFO  | A [2] --- copy-ceph-keys
2025-07-12 13:42:15.246999 | orchestrator | 2025-07-12 13:42:15 | INFO  | A [3] ---- cephclient
2025-07-12 13:42:15.247010 | orchestrator | 2025-07-12 13:42:15 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-07-12 13:42:15.247055 | orchestrator | 2025-07-12 13:42:15 | INFO  | A [4] ----- wait-for-keystone
2025-07-12 13:42:15.247606 | orchestrator | 2025-07-12 13:42:15 | INFO  | D [5] ------ kolla-ceph-rgw
2025-07-12 13:42:15.247633 | orchestrator | 2025-07-12 13:42:15 | INFO  | D [5] ------ glance
2025-07-12 13:42:15.247645 | orchestrator | 2025-07-12 13:42:15 | INFO  | D [5] ------ cinder
2025-07-12 13:42:15.247662 | orchestrator | 2025-07-12 13:42:15 | INFO  | D [5] ------ nova
2025-07-12 13:42:15.247673 | orchestrator | 2025-07-12 13:42:15 | INFO  | A [4] ----- prometheus
2025-07-12 13:42:15.247987 | orchestrator | 2025-07-12 13:42:15 | INFO  | D [5] ------ grafana
2025-07-12 13:42:15.458940 | orchestrator | 2025-07-12 13:42:15 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-07-12 13:42:15.459009 | orchestrator | 2025-07-12 13:42:15 | INFO  | Tasks are running in the background
2025-07-12 13:42:18.510098 | orchestrator | 2025-07-12 13:42:18 | INFO  | No task IDs specified, wait for all
currently running tasks
2025-07-12 13:42:20.638831 | orchestrator | 2025-07-12 13:42:20 | INFO  | Task e8baf052-5919-413d-a7de-52269672be2e is in state STARTED
2025-07-12 13:42:20.638954 | orchestrator | 2025-07-12 13:42:20 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED
2025-07-12 13:42:20.640108 | orchestrator | 2025-07-12 13:42:20 | INFO  | Task 7b97b5ff-2bd2-4cce-b57b-5008f4b37784 is in state STARTED
2025-07-12 13:42:20.640745 | orchestrator | 2025-07-12 13:42:20 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED
2025-07-12 13:42:20.642988 | orchestrator | 2025-07-12 13:42:20 | INFO  | Task 67095fa2-1492-4ac1-ac5f-6034a1d25884 is in state STARTED
2025-07-12 13:42:20.643692 | orchestrator | 2025-07-12 13:42:20 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED
2025-07-12 13:42:20.644371 | orchestrator | 2025-07-12 13:42:20 | INFO  | Task 1b90b84f-8e44-4526-a386-268f4894b381 is in state STARTED
2025-07-12 13:42:20.644435 | orchestrator | 2025-07-12 13:42:20 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:42:23.691899 | orchestrator | 2025-07-12 13:42:23 | INFO  | Task e8baf052-5919-413d-a7de-52269672be2e is in state STARTED
2025-07-12 13:42:23.692102 | orchestrator | 2025-07-12 13:42:23 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED
2025-07-12 13:42:23.692602 | orchestrator | 2025-07-12 13:42:23 | INFO  | Task 7b97b5ff-2bd2-4cce-b57b-5008f4b37784 is in state STARTED
2025-07-12 13:42:23.693062 | orchestrator | 2025-07-12 13:42:23 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED
2025-07-12 13:42:23.693837 | orchestrator | 2025-07-12 13:42:23 | INFO  | Task 67095fa2-1492-4ac1-ac5f-6034a1d25884 is in state STARTED
2025-07-12 13:42:23.694539 | orchestrator | 2025-07-12 13:42:23 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED
2025-07-12 13:42:23.695265 | orchestrator | 2025-07-12 13:42:23 | INFO  | Task 1b90b84f-8e44-4526-a386-268f4894b381 is in state STARTED
2025-07-12 13:42:23.695288 | orchestrator | 2025-07-12 13:42:23 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:42:26.739034 | orchestrator | 2025-07-12 13:42:26 | INFO  | Task e8baf052-5919-413d-a7de-52269672be2e is in state STARTED
2025-07-12 13:42:26.739162 | orchestrator | 2025-07-12 13:42:26 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED
2025-07-12 13:42:26.739824 | orchestrator | 2025-07-12 13:42:26 | INFO  | Task 7b97b5ff-2bd2-4cce-b57b-5008f4b37784 is in state STARTED
2025-07-12 13:42:26.740301 | orchestrator | 2025-07-12 13:42:26 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED
2025-07-12 13:42:26.741039 | orchestrator | 2025-07-12 13:42:26 | INFO  | Task 67095fa2-1492-4ac1-ac5f-6034a1d25884 is in state STARTED
2025-07-12 13:42:26.741517 | orchestrator | 2025-07-12 13:42:26 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED
2025-07-12 13:42:26.742159 | orchestrator | 2025-07-12 13:42:26 | INFO  | Task 1b90b84f-8e44-4526-a386-268f4894b381 is in state STARTED
2025-07-12 13:42:26.742184 | orchestrator | 2025-07-12 13:42:26 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:42:29.820035 | orchestrator | 2025-07-12 13:42:29 | INFO  | Task e8baf052-5919-413d-a7de-52269672be2e is in state STARTED
2025-07-12 13:42:29.820132 | orchestrator | 2025-07-12 13:42:29 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED
2025-07-12 13:42:29.820148 | orchestrator | 2025-07-12 13:42:29 | INFO  | Task 7b97b5ff-2bd2-4cce-b57b-5008f4b37784 is in state STARTED
2025-07-12 13:42:29.820161 | orchestrator | 2025-07-12 13:42:29 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED
2025-07-12 13:42:29.820173 | orchestrator | 2025-07-12 13:42:29 | INFO  | Task 67095fa2-1492-4ac1-ac5f-6034a1d25884 is in state STARTED
2025-07-12 13:42:29.824987 | orchestrator | 2025-07-12 13:42:29 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED
2025-07-12 13:42:29.825027 | orchestrator | 2025-07-12 13:42:29 | INFO  | Task 1b90b84f-8e44-4526-a386-268f4894b381 is in state STARTED
2025-07-12 13:42:29.825039 | orchestrator | 2025-07-12 13:42:29 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:42:32.877277 | orchestrator | 2025-07-12 13:42:32 | INFO  | Task e8baf052-5919-413d-a7de-52269672be2e is in state STARTED
2025-07-12 13:42:32.877436 | orchestrator | 2025-07-12 13:42:32 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED
2025-07-12 13:42:32.877664 | orchestrator | 2025-07-12 13:42:32 | INFO  | Task 7b97b5ff-2bd2-4cce-b57b-5008f4b37784 is in state STARTED
2025-07-12 13:42:32.881839 | orchestrator | 2025-07-12 13:42:32 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED
2025-07-12 13:42:32.882486 | orchestrator | 2025-07-12 13:42:32 | INFO  | Task 67095fa2-1492-4ac1-ac5f-6034a1d25884 is in state STARTED
2025-07-12 13:42:32.883452 | orchestrator | 2025-07-12 13:42:32 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED
2025-07-12 13:42:32.885885 | orchestrator | 2025-07-12 13:42:32 | INFO  | Task 1b90b84f-8e44-4526-a386-268f4894b381 is in state STARTED
2025-07-12 13:42:32.885912 | orchestrator | 2025-07-12 13:42:32 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:42:35.936135 | orchestrator | 2025-07-12 13:42:35 | INFO  | Task e8baf052-5919-413d-a7de-52269672be2e is in state STARTED
2025-07-12 13:42:35.936245 | orchestrator | 2025-07-12 13:42:35 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED
2025-07-12 13:42:35.936269 | orchestrator | 2025-07-12 13:42:35 | INFO  | Task 7b97b5ff-2bd2-4cce-b57b-5008f4b37784 is in state STARTED
2025-07-12 13:42:35.936288 | orchestrator | 2025-07-12 13:42:35 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED
2025-07-12 13:42:35.936778 | orchestrator | 2025-07-12 13:42:35 | INFO  | Task 67095fa2-1492-4ac1-ac5f-6034a1d25884 is in state STARTED
2025-07-12 13:42:35.938068 | orchestrator | 2025-07-12 13:42:35 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED
2025-07-12 13:42:35.938613 | orchestrator | 2025-07-12 13:42:35 | INFO  | Task 1b90b84f-8e44-4526-a386-268f4894b381 is in state STARTED
2025-07-12 13:42:35.938638 | orchestrator | 2025-07-12 13:42:35 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:42:39.013930 | orchestrator | 2025-07-12 13:42:39 | INFO  | Task e8baf052-5919-413d-a7de-52269672be2e is in state STARTED
2025-07-12 13:42:39.017484 | orchestrator | 2025-07-12 13:42:39 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED
2025-07-12 13:42:39.018132 | orchestrator | 2025-07-12 13:42:39 | INFO  | Task 7b97b5ff-2bd2-4cce-b57b-5008f4b37784 is in state STARTED
2025-07-12 13:42:39.020286 | orchestrator | 2025-07-12 13:42:39 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED
2025-07-12 13:42:39.023636 | orchestrator | 2025-07-12 13:42:39 | INFO  | Task 67095fa2-1492-4ac1-ac5f-6034a1d25884 is in state STARTED
2025-07-12 13:42:39.025181 | orchestrator | 2025-07-12 13:42:39 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED
2025-07-12 13:42:39.026121 | orchestrator | 2025-07-12 13:42:39 | INFO  | Task 1b90b84f-8e44-4526-a386-268f4894b381 is in state STARTED
2025-07-12 13:42:39.027773 | orchestrator | 2025-07-12 13:42:39 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:42:42.102501 | orchestrator | 2025-07-12 13:42:42 | INFO  | Task e8baf052-5919-413d-a7de-52269672be2e is in state STARTED
2025-07-12 13:42:42.104277 | orchestrator | 2025-07-12 13:42:42 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED
2025-07-12 13:42:42.106814 | orchestrator | 2025-07-12 13:42:42 | INFO  | Task 7b97b5ff-2bd2-4cce-b57b-5008f4b37784 is in state STARTED
2025-07-12 13:42:42.106848 | orchestrator | 2025-07-12 13:42:42 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED
2025-07-12 13:42:42.109213 | orchestrator | 2025-07-12 13:42:42 | INFO  | Task 67095fa2-1492-4ac1-ac5f-6034a1d25884 is in state STARTED
2025-07-12 13:42:42.110082 | orchestrator | 2025-07-12 13:42:42 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED
2025-07-12 13:42:42.113735 | orchestrator | 2025-07-12 13:42:42 | INFO  | Task 1b90b84f-8e44-4526-a386-268f4894b381 is in state STARTED
2025-07-12 13:42:42.113781 | orchestrator | 2025-07-12 13:42:42 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:42:45.193406 | orchestrator |
2025-07-12 13:42:45.193523 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-07-12 13:42:45.193540 | orchestrator |
2025-07-12 13:42:45.193552 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2025-07-12 13:42:45.193564 | orchestrator | Saturday 12 July 2025 13:42:26 +0000 (0:00:00.387) 0:00:00.387 *********
2025-07-12 13:42:45.193575 | orchestrator | changed: [testbed-manager]
2025-07-12 13:42:45.193587 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:42:45.193598 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:42:45.193609 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:42:45.193620 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:42:45.193631 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:42:45.193642 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:42:45.193653 | orchestrator |
2025-07-12 13:42:45.193663 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.]
********
2025-07-12 13:42:45.193674 | orchestrator | Saturday 12 July 2025 13:42:30 +0000 (0:00:03.782) 0:00:04.170 *********
2025-07-12 13:42:45.193686 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-07-12 13:42:45.193697 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-07-12 13:42:45.193708 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-07-12 13:42:45.193718 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-07-12 13:42:45.193729 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-07-12 13:42:45.193739 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-07-12 13:42:45.193750 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-07-12 13:42:45.193761 | orchestrator |
2025-07-12 13:42:45.193772 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2025-07-12 13:42:45.193808 | orchestrator | Saturday 12 July 2025 13:42:32 +0000 (0:00:02.322) 0:00:06.493 *********
2025-07-12 13:42:45.193824 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-12 13:42:31.378897', 'end': '2025-07-12 13:42:31.387003', 'delta': '0:00:00.008106', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-07-12 13:42:45.193840 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-12 13:42:31.326349', 'end': '2025-07-12 13:42:31.331650', 'delta': '0:00:00.005301', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-07-12 13:42:45.193852 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-12 13:42:31.332254', 'end': '2025-07-12 13:42:31.340978', 'delta': '0:00:00.008724', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-07-12 13:42:45.193958 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-12 13:42:31.789874', 'end': '2025-07-12 13:42:31.797934', 'delta': '0:00:00.008060', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-07-12 13:42:45.193975 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-12 13:42:31.982719', 'end': '2025-07-12 13:42:31.991563', 'delta': '0:00:00.008844', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-07-12 13:42:45.193998 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-12 13:42:32.230901', 'end': '2025-07-12 13:42:32.239512', 'delta': '0:00:00.008611', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-07-12 13:42:45.194011 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-12 13:42:32.358686', 'end': '2025-07-12 13:42:32.367487', 'delta': '0:00:00.008801', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-07-12 13:42:45.194089 | orchestrator |
2025-07-12 13:42:45.194110 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.]
****
2025-07-12 13:42:45.194130 | orchestrator | Saturday 12 July 2025 13:42:35 +0000 (0:00:02.630) 0:00:09.123 *********
2025-07-12 13:42:45.194149 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-07-12 13:42:45.194165 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-07-12 13:42:45.194177 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-07-12 13:42:45.194189 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-07-12 13:42:45.194201 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-07-12 13:42:45.194212 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-07-12 13:42:45.194224 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-07-12 13:42:45.194236 | orchestrator |
2025-07-12 13:42:45.194248 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-07-12 13:42:45.194259 | orchestrator | Saturday 12 July 2025 13:42:38 +0000 (0:00:03.182) 0:00:12.305 *********
2025-07-12 13:42:45.194270 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-07-12 13:42:45.194280 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-07-12 13:42:45.194291 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-07-12 13:42:45.194301 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-07-12 13:42:45.194311 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-07-12 13:42:45.194322 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-07-12 13:42:45.194332 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-07-12 13:42:45.194371 | orchestrator |
2025-07-12 13:42:45.194389 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:42:45.194411 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:42:45.194424 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:42:45.194450 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:42:45.194462 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:42:45.194472 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:42:45.194483 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:42:45.194493 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:42:45.194504 | orchestrator |
2025-07-12 13:42:45.194514 | orchestrator |
2025-07-12 13:42:45.194525 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:42:45.194535 | orchestrator | Saturday 12 July 2025 13:42:43 +0000 (0:00:04.950) 0:00:17.256 *********
2025-07-12 13:42:45.194546 | orchestrator | ===============================================================================
2025-07-12 13:42:45.194556 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 4.95s
2025-07-12 13:42:45.194567 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.78s
2025-07-12 13:42:45.194595 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 3.18s
2025-07-12 13:42:45.194606 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.63s
2025-07-12 13:42:45.194628 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links.
-------- 2.32s
2025-07-12 13:42:45.194639 | orchestrator | 2025-07-12 13:42:45 | INFO  | Task e8baf052-5919-413d-a7de-52269672be2e is in state SUCCESS
2025-07-12 13:42:45.194649 | orchestrator | 2025-07-12 13:42:45 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED
2025-07-12 13:42:45.202922 | orchestrator | 2025-07-12 13:42:45 | INFO  | Task 7b97b5ff-2bd2-4cce-b57b-5008f4b37784 is in state STARTED
2025-07-12 13:42:45.203399 | orchestrator | 2025-07-12 13:42:45 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED
2025-07-12 13:42:45.203790 | orchestrator | 2025-07-12 13:42:45 | INFO  | Task 67095fa2-1492-4ac1-ac5f-6034a1d25884 is in state STARTED
2025-07-12 13:42:45.204265 | orchestrator | 2025-07-12 13:42:45 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED
2025-07-12 13:42:45.205088 | orchestrator | 2025-07-12 13:42:45 | INFO  | Task 1b90b84f-8e44-4526-a386-268f4894b381 is in state STARTED
2025-07-12 13:42:45.205099 | orchestrator | 2025-07-12 13:42:45 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:42:48.251258 | orchestrator | 2025-07-12 13:42:48 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED
2025-07-12 13:42:48.251419 | orchestrator | 2025-07-12 13:42:48 | INFO  | Task 7b97b5ff-2bd2-4cce-b57b-5008f4b37784 is in state STARTED
2025-07-12 13:42:48.251611 | orchestrator | 2025-07-12 13:42:48 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED
2025-07-12 13:42:48.251628 | orchestrator | 2025-07-12 13:42:48 | INFO  | Task 67095fa2-1492-4ac1-ac5f-6034a1d25884 is in state STARTED
2025-07-12 13:42:48.251640 | orchestrator | 2025-07-12 13:42:48 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED
2025-07-12 13:42:48.251651 | orchestrator | 2025-07-12 13:42:48 | INFO  | Task 1e03776e-5fb0-4f28-bcea-e60dac14b396 is in state STARTED
2025-07-12 13:42:48.251662 | orchestrator | 2025-07-12 13:42:48 | INFO  | Task 1b90b84f-8e44-4526-a386-268f4894b381 is in state STARTED
2025-07-12 13:42:48.251714 | orchestrator | 2025-07-12 13:42:48 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:42:51.312940 | orchestrator | 2025-07-12 13:42:51 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED
2025-07-12 13:42:51.313215 | orchestrator | 2025-07-12 13:42:51 | INFO  | Task 7b97b5ff-2bd2-4cce-b57b-5008f4b37784 is in state STARTED
2025-07-12 13:42:51.313253 | orchestrator | 2025-07-12 13:42:51 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED
2025-07-12 13:42:51.314839 | orchestrator | 2025-07-12 13:42:51 | INFO  | Task 67095fa2-1492-4ac1-ac5f-6034a1d25884 is in state STARTED
2025-07-12 13:42:51.315556 | orchestrator | 2025-07-12 13:42:51 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED
2025-07-12 13:42:51.316978 | orchestrator | 2025-07-12 13:42:51 | INFO  | Task 1e03776e-5fb0-4f28-bcea-e60dac14b396 is in state STARTED
2025-07-12 13:42:51.318156 | orchestrator | 2025-07-12 13:42:51 | INFO  | Task 1b90b84f-8e44-4526-a386-268f4894b381 is in state STARTED
2025-07-12 13:42:51.318184 | orchestrator | 2025-07-12 13:42:51 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:42:54.364484 | orchestrator | 2025-07-12 13:42:54 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED
2025-07-12 13:42:54.364595 | orchestrator | 2025-07-12 13:42:54 | INFO  | Task 7b97b5ff-2bd2-4cce-b57b-5008f4b37784 is in state STARTED
2025-07-12 13:42:54.364611 | orchestrator | 2025-07-12 13:42:54 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED
2025-07-12 13:42:54.364949 | orchestrator | 2025-07-12 13:42:54 | INFO  | Task 67095fa2-1492-4ac1-ac5f-6034a1d25884 is in state STARTED
2025-07-12 13:42:54.365526 | orchestrator | 2025-07-12 13:42:54 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED
2025-07-12 13:42:54.366112 | orchestrator | 2025-07-12 13:42:54 | INFO  | Task 1e03776e-5fb0-4f28-bcea-e60dac14b396 is in state STARTED
2025-07-12 13:42:54.367298 | orchestrator | 2025-07-12 13:42:54 | INFO  | Task 1b90b84f-8e44-4526-a386-268f4894b381 is in state STARTED
2025-07-12 13:42:54.367348 | orchestrator | 2025-07-12 13:42:54 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:42:57.411734 | orchestrator | 2025-07-12 13:42:57 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED
2025-07-12 13:42:57.411844 | orchestrator | 2025-07-12 13:42:57 | INFO  | Task 7b97b5ff-2bd2-4cce-b57b-5008f4b37784 is in state STARTED
2025-07-12 13:42:57.418923 | orchestrator | 2025-07-12 13:42:57 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED
2025-07-12 13:42:57.421799 | orchestrator | 2025-07-12 13:42:57 | INFO  | Task 67095fa2-1492-4ac1-ac5f-6034a1d25884 is in state STARTED
2025-07-12 13:42:57.428638 | orchestrator | 2025-07-12 13:42:57 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED
2025-07-12 13:42:57.431885 | orchestrator | 2025-07-12 13:42:57 | INFO  | Task 1e03776e-5fb0-4f28-bcea-e60dac14b396 is in state STARTED
2025-07-12 13:42:57.438697 | orchestrator | 2025-07-12 13:42:57 | INFO  | Task 1b90b84f-8e44-4526-a386-268f4894b381 is in state STARTED
2025-07-12 13:42:57.438729 | orchestrator | 2025-07-12 13:42:57 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:43:00.523098 | orchestrator | 2025-07-12 13:43:00 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED
2025-07-12 13:43:00.523223 | orchestrator | 2025-07-12 13:43:00 | INFO  | Task 7b97b5ff-2bd2-4cce-b57b-5008f4b37784 is in state STARTED
2025-07-12 13:43:00.523723 | orchestrator | 2025-07-12 13:43:00 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED
2025-07-12 13:43:00.524552 | orchestrator | 2025-07-12 13:43:00 | INFO  | Task 67095fa2-1492-4ac1-ac5f-6034a1d25884 is in state STARTED
2025-07-12 13:43:00.525158 | orchestrator | 2025-07-12 13:43:00 | INFO  | Task
4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:43:00.526098 | orchestrator | 2025-07-12 13:43:00 | INFO  | Task 1e03776e-5fb0-4f28-bcea-e60dac14b396 is in state STARTED 2025-07-12 13:43:00.529141 | orchestrator | 2025-07-12 13:43:00 | INFO  | Task 1b90b84f-8e44-4526-a386-268f4894b381 is in state STARTED 2025-07-12 13:43:00.529179 | orchestrator | 2025-07-12 13:43:00 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:43:03.592490 | orchestrator | 2025-07-12 13:43:03 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:43:03.593006 | orchestrator | 2025-07-12 13:43:03 | INFO  | Task 7b97b5ff-2bd2-4cce-b57b-5008f4b37784 is in state SUCCESS 2025-07-12 13:43:03.595851 | orchestrator | 2025-07-12 13:43:03 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:43:03.596732 | orchestrator | 2025-07-12 13:43:03 | INFO  | Task 67095fa2-1492-4ac1-ac5f-6034a1d25884 is in state STARTED 2025-07-12 13:43:03.597512 | orchestrator | 2025-07-12 13:43:03 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:43:03.599544 | orchestrator | 2025-07-12 13:43:03 | INFO  | Task 1e03776e-5fb0-4f28-bcea-e60dac14b396 is in state STARTED 2025-07-12 13:43:03.600002 | orchestrator | 2025-07-12 13:43:03 | INFO  | Task 1b90b84f-8e44-4526-a386-268f4894b381 is in state STARTED 2025-07-12 13:43:03.600038 | orchestrator | 2025-07-12 13:43:03 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:43:06.640042 | orchestrator | 2025-07-12 13:43:06 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:43:06.642153 | orchestrator | 2025-07-12 13:43:06 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:43:06.643592 | orchestrator | 2025-07-12 13:43:06 | INFO  | Task 67095fa2-1492-4ac1-ac5f-6034a1d25884 is in state STARTED 2025-07-12 13:43:06.644763 | orchestrator | 2025-07-12 13:43:06 | INFO  | Task 
4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:43:06.645551 | orchestrator | 2025-07-12 13:43:06 | INFO  | Task 1e03776e-5fb0-4f28-bcea-e60dac14b396 is in state STARTED 2025-07-12 13:43:06.647569 | orchestrator | 2025-07-12 13:43:06 | INFO  | Task 1b90b84f-8e44-4526-a386-268f4894b381 is in state STARTED 2025-07-12 13:43:06.647596 | orchestrator | 2025-07-12 13:43:06 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:43:09.700105 | orchestrator | 2025-07-12 13:43:09 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:43:09.700220 | orchestrator | 2025-07-12 13:43:09 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:43:09.700235 | orchestrator | 2025-07-12 13:43:09 | INFO  | Task 67095fa2-1492-4ac1-ac5f-6034a1d25884 is in state STARTED 2025-07-12 13:43:09.708066 | orchestrator | 2025-07-12 13:43:09 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:43:09.708124 | orchestrator | 2025-07-12 13:43:09 | INFO  | Task 1e03776e-5fb0-4f28-bcea-e60dac14b396 is in state STARTED 2025-07-12 13:43:09.708155 | orchestrator | 2025-07-12 13:43:09 | INFO  | Task 1b90b84f-8e44-4526-a386-268f4894b381 is in state STARTED 2025-07-12 13:43:09.708179 | orchestrator | 2025-07-12 13:43:09 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:43:12.754769 | orchestrator | 2025-07-12 13:43:12 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:43:12.754909 | orchestrator | 2025-07-12 13:43:12 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:43:12.754925 | orchestrator | 2025-07-12 13:43:12 | INFO  | Task 67095fa2-1492-4ac1-ac5f-6034a1d25884 is in state STARTED 2025-07-12 13:43:12.754936 | orchestrator | 2025-07-12 13:43:12 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:43:12.754947 | orchestrator | 2025-07-12 13:43:12 | INFO  | Task 
1e03776e-5fb0-4f28-bcea-e60dac14b396 is in state STARTED 2025-07-12 13:43:12.754958 | orchestrator | 2025-07-12 13:43:12 | INFO  | Task 1b90b84f-8e44-4526-a386-268f4894b381 is in state STARTED 2025-07-12 13:43:12.754969 | orchestrator | 2025-07-12 13:43:12 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:43:15.794322 | orchestrator | 2025-07-12 13:43:15 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:43:15.794431 | orchestrator | 2025-07-12 13:43:15 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:43:15.795069 | orchestrator | 2025-07-12 13:43:15 | INFO  | Task 67095fa2-1492-4ac1-ac5f-6034a1d25884 is in state SUCCESS 2025-07-12 13:43:15.796688 | orchestrator | 2025-07-12 13:43:15 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:43:15.803896 | orchestrator | 2025-07-12 13:43:15 | INFO  | Task 1e03776e-5fb0-4f28-bcea-e60dac14b396 is in state STARTED 2025-07-12 13:43:15.806225 | orchestrator | 2025-07-12 13:43:15 | INFO  | Task 1b90b84f-8e44-4526-a386-268f4894b381 is in state STARTED 2025-07-12 13:43:15.806259 | orchestrator | 2025-07-12 13:43:15 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:43:18.844131 | orchestrator | 2025-07-12 13:43:18 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:43:18.846087 | orchestrator | 2025-07-12 13:43:18 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:43:18.846804 | orchestrator | 2025-07-12 13:43:18 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:43:18.848317 | orchestrator | 2025-07-12 13:43:18 | INFO  | Task 1e03776e-5fb0-4f28-bcea-e60dac14b396 is in state STARTED 2025-07-12 13:43:18.849930 | orchestrator | 2025-07-12 13:43:18 | INFO  | Task 1b90b84f-8e44-4526-a386-268f4894b381 is in state STARTED 2025-07-12 13:43:18.849952 | orchestrator | 2025-07-12 13:43:18 | INFO  | Wait 1 
second(s) until the next check 2025-07-12 13:43:21.903368 | orchestrator | 2025-07-12 13:43:21 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:43:21.904297 | orchestrator | 2025-07-12 13:43:21 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:43:21.905138 | orchestrator | 2025-07-12 13:43:21 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:43:21.906466 | orchestrator | 2025-07-12 13:43:21 | INFO  | Task 1e03776e-5fb0-4f28-bcea-e60dac14b396 is in state STARTED 2025-07-12 13:43:21.907978 | orchestrator | 2025-07-12 13:43:21 | INFO  | Task 1b90b84f-8e44-4526-a386-268f4894b381 is in state STARTED 2025-07-12 13:43:21.908009 | orchestrator | 2025-07-12 13:43:21 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:43:24.950128 | orchestrator | 2025-07-12 13:43:24 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:43:24.950231 | orchestrator | 2025-07-12 13:43:24 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:43:24.950953 | orchestrator | 2025-07-12 13:43:24 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:43:24.952802 | orchestrator | 2025-07-12 13:43:24 | INFO  | Task 1e03776e-5fb0-4f28-bcea-e60dac14b396 is in state STARTED 2025-07-12 13:43:24.954588 | orchestrator | 2025-07-12 13:43:24 | INFO  | Task 1b90b84f-8e44-4526-a386-268f4894b381 is in state STARTED 2025-07-12 13:43:24.954623 | orchestrator | 2025-07-12 13:43:24 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:43:28.018013 | orchestrator | 2025-07-12 13:43:28 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:43:28.020388 | orchestrator | 2025-07-12 13:43:28 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:43:28.021283 | orchestrator | 2025-07-12 13:43:28 | INFO  | Task 
4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:43:28.026142 | orchestrator | 2025-07-12 13:43:28 | INFO  | Task 1e03776e-5fb0-4f28-bcea-e60dac14b396 is in state STARTED 2025-07-12 13:43:28.026197 | orchestrator | 2025-07-12 13:43:28 | INFO  | Task 1b90b84f-8e44-4526-a386-268f4894b381 is in state STARTED 2025-07-12 13:43:28.026218 | orchestrator | 2025-07-12 13:43:28 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:43:31.074428 | orchestrator | 2025-07-12 13:43:31 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:43:31.074537 | orchestrator | 2025-07-12 13:43:31 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:43:31.075963 | orchestrator | 2025-07-12 13:43:31 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:43:31.078839 | orchestrator | 2025-07-12 13:43:31 | INFO  | Task 1e03776e-5fb0-4f28-bcea-e60dac14b396 is in state STARTED 2025-07-12 13:43:31.078895 | orchestrator | 2025-07-12 13:43:31 | INFO  | Task 1b90b84f-8e44-4526-a386-268f4894b381 is in state STARTED 2025-07-12 13:43:31.078908 | orchestrator | 2025-07-12 13:43:31 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:43:34.121724 | orchestrator | 2025-07-12 13:43:34 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:43:34.123112 | orchestrator | 2025-07-12 13:43:34 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:43:34.127031 | orchestrator | 2025-07-12 13:43:34 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:43:34.127086 | orchestrator | 2025-07-12 13:43:34 | INFO  | Task 1e03776e-5fb0-4f28-bcea-e60dac14b396 is in state STARTED 2025-07-12 13:43:34.128181 | orchestrator | 2025-07-12 13:43:34 | INFO  | Task 1b90b84f-8e44-4526-a386-268f4894b381 is in state SUCCESS 2025-07-12 13:43:34.128205 | orchestrator | 2025-07-12 13:43:34 | INFO  | Wait 1 
second(s) until the next check 2025-07-12 13:43:34.131181 | orchestrator | 2025-07-12 13:43:34.131278 | orchestrator | 2025-07-12 13:43:34.131299 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-07-12 13:43:34.131317 | orchestrator | 2025-07-12 13:43:34.131336 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-07-12 13:43:34.131355 | orchestrator | Saturday 12 July 2025 13:42:27 +0000 (0:00:00.290) 0:00:00.290 ********* 2025-07-12 13:43:34.131373 | orchestrator | ok: [testbed-manager] => { 2025-07-12 13:43:34.131394 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2025-07-12 13:43:34.131413 | orchestrator | } 2025-07-12 13:43:34.131431 | orchestrator | 2025-07-12 13:43:34.131449 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-07-12 13:43:34.131467 | orchestrator | Saturday 12 July 2025 13:42:27 +0000 (0:00:00.143) 0:00:00.434 ********* 2025-07-12 13:43:34.131486 | orchestrator | ok: [testbed-manager] 2025-07-12 13:43:34.131534 | orchestrator | 2025-07-12 13:43:34.131554 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-07-12 13:43:34.131572 | orchestrator | Saturday 12 July 2025 13:42:29 +0000 (0:00:01.539) 0:00:01.973 ********* 2025-07-12 13:43:34.131590 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-07-12 13:43:34.131608 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-07-12 13:43:34.131626 | orchestrator | 2025-07-12 13:43:34.131645 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-07-12 13:43:34.131664 | orchestrator | Saturday 12 July 2025 13:42:30 +0000 (0:00:01.454) 0:00:03.427 ********* 2025-07-12 13:43:34.131682 | orchestrator | changed: 
[testbed-manager] 2025-07-12 13:43:34.131700 | orchestrator | 2025-07-12 13:43:34.131761 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-07-12 13:43:34.131782 | orchestrator | Saturday 12 July 2025 13:42:33 +0000 (0:00:02.931) 0:00:06.358 ********* 2025-07-12 13:43:34.131802 | orchestrator | changed: [testbed-manager] 2025-07-12 13:43:34.131822 | orchestrator | 2025-07-12 13:43:34.131844 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-07-12 13:43:34.131864 | orchestrator | Saturday 12 July 2025 13:42:34 +0000 (0:00:01.419) 0:00:07.778 ********* 2025-07-12 13:43:34.131884 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 2025-07-12 13:43:34.131905 | orchestrator | ok: [testbed-manager] 2025-07-12 13:43:34.131924 | orchestrator | 2025-07-12 13:43:34.131943 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-07-12 13:43:34.131965 | orchestrator | Saturday 12 July 2025 13:43:00 +0000 (0:00:25.389) 0:00:33.167 ********* 2025-07-12 13:43:34.131985 | orchestrator | changed: [testbed-manager] 2025-07-12 13:43:34.132006 | orchestrator | 2025-07-12 13:43:34.132026 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:43:34.132047 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:43:34.132069 | orchestrator | 2025-07-12 13:43:34.132089 | orchestrator | 2025-07-12 13:43:34.132108 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:43:34.132130 | orchestrator | Saturday 12 July 2025 13:43:02 +0000 (0:00:02.301) 0:00:35.468 ********* 2025-07-12 13:43:34.132149 | orchestrator | =============================================================================== 2025-07-12 13:43:34.132170 | 
orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.39s 2025-07-12 13:43:34.132185 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.93s 2025-07-12 13:43:34.132196 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.30s 2025-07-12 13:43:34.132207 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.54s 2025-07-12 13:43:34.132217 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.45s 2025-07-12 13:43:34.132228 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.42s 2025-07-12 13:43:34.132278 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.14s 2025-07-12 13:43:34.132292 | orchestrator | 2025-07-12 13:43:34.132310 | orchestrator | 2025-07-12 13:43:34.132329 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-07-12 13:43:34.132347 | orchestrator | 2025-07-12 13:43:34.132364 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-07-12 13:43:34.132380 | orchestrator | Saturday 12 July 2025 13:42:27 +0000 (0:00:01.094) 0:00:01.094 ********* 2025-07-12 13:43:34.132396 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-07-12 13:43:34.132414 | orchestrator | 2025-07-12 13:43:34.132430 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-07-12 13:43:34.132446 | orchestrator | Saturday 12 July 2025 13:42:28 +0000 (0:00:00.680) 0:00:01.775 ********* 2025-07-12 13:43:34.132480 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-07-12 13:43:34.132498 | orchestrator | 
changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-07-12 13:43:34.132518 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-07-12 13:43:34.132535 | orchestrator | 2025-07-12 13:43:34.132555 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-07-12 13:43:34.132566 | orchestrator | Saturday 12 July 2025 13:42:30 +0000 (0:00:01.822) 0:00:03.598 ********* 2025-07-12 13:43:34.132585 | orchestrator | changed: [testbed-manager] 2025-07-12 13:43:34.132595 | orchestrator | 2025-07-12 13:43:34.132606 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-07-12 13:43:34.132617 | orchestrator | Saturday 12 July 2025 13:42:32 +0000 (0:00:01.982) 0:00:05.580 ********* 2025-07-12 13:43:34.132646 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-07-12 13:43:34.132657 | orchestrator | ok: [testbed-manager] 2025-07-12 13:43:34.132668 | orchestrator | 2025-07-12 13:43:34.132679 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-07-12 13:43:34.132689 | orchestrator | Saturday 12 July 2025 13:43:07 +0000 (0:00:35.599) 0:00:41.179 ********* 2025-07-12 13:43:34.132700 | orchestrator | changed: [testbed-manager] 2025-07-12 13:43:34.132710 | orchestrator | 2025-07-12 13:43:34.132721 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-07-12 13:43:34.132732 | orchestrator | Saturday 12 July 2025 13:43:08 +0000 (0:00:00.957) 0:00:42.137 ********* 2025-07-12 13:43:34.132742 | orchestrator | ok: [testbed-manager] 2025-07-12 13:43:34.132753 | orchestrator | 2025-07-12 13:43:34.132764 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-07-12 13:43:34.132775 | orchestrator | Saturday 12 July 2025 13:43:09 +0000 (0:00:00.857) 0:00:42.995 
********* 2025-07-12 13:43:34.132785 | orchestrator | changed: [testbed-manager] 2025-07-12 13:43:34.132796 | orchestrator | 2025-07-12 13:43:34.132806 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-07-12 13:43:34.132817 | orchestrator | Saturday 12 July 2025 13:43:11 +0000 (0:00:01.803) 0:00:44.798 ********* 2025-07-12 13:43:34.132828 | orchestrator | changed: [testbed-manager] 2025-07-12 13:43:34.132838 | orchestrator | 2025-07-12 13:43:34.132848 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-07-12 13:43:34.132859 | orchestrator | Saturday 12 July 2025 13:43:12 +0000 (0:00:00.857) 0:00:45.656 ********* 2025-07-12 13:43:34.132869 | orchestrator | changed: [testbed-manager] 2025-07-12 13:43:34.132880 | orchestrator | 2025-07-12 13:43:34.132891 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-07-12 13:43:34.132901 | orchestrator | Saturday 12 July 2025 13:43:12 +0000 (0:00:00.704) 0:00:46.360 ********* 2025-07-12 13:43:34.132911 | orchestrator | ok: [testbed-manager] 2025-07-12 13:43:34.132922 | orchestrator | 2025-07-12 13:43:34.132932 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:43:34.132943 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:43:34.132954 | orchestrator | 2025-07-12 13:43:34.132964 | orchestrator | 2025-07-12 13:43:34.132975 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:43:34.132985 | orchestrator | Saturday 12 July 2025 13:43:13 +0000 (0:00:00.372) 0:00:46.733 ********* 2025-07-12 13:43:34.132996 | orchestrator | =============================================================================== 2025-07-12 13:43:34.133006 | orchestrator | osism.services.openstackclient : Manage 
openstackclient service -------- 35.60s 2025-07-12 13:43:34.133017 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.98s 2025-07-12 13:43:34.133027 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.82s 2025-07-12 13:43:34.133045 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.80s 2025-07-12 13:43:34.133056 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.96s 2025-07-12 13:43:34.133066 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.86s 2025-07-12 13:43:34.133077 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.86s 2025-07-12 13:43:34.133087 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.70s 2025-07-12 13:43:34.133098 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.67s 2025-07-12 13:43:34.133108 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.37s 2025-07-12 13:43:34.133119 | orchestrator | 2025-07-12 13:43:34.133130 | orchestrator | 2025-07-12 13:43:34.133140 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 13:43:34.133151 | orchestrator | 2025-07-12 13:43:34.133162 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 13:43:34.133172 | orchestrator | Saturday 12 July 2025 13:42:28 +0000 (0:00:00.443) 0:00:00.443 ********* 2025-07-12 13:43:34.133183 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-07-12 13:43:34.133193 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-07-12 13:43:34.133204 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-07-12 13:43:34.133214 | orchestrator | 
changed: [testbed-node-2] => (item=enable_netdata_True) 2025-07-12 13:43:34.133225 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-07-12 13:43:34.133259 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-07-12 13:43:34.133270 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-07-12 13:43:34.133281 | orchestrator | 2025-07-12 13:43:34.133291 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-07-12 13:43:34.133302 | orchestrator | 2025-07-12 13:43:34.133312 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-07-12 13:43:34.133323 | orchestrator | Saturday 12 July 2025 13:42:31 +0000 (0:00:02.821) 0:00:03.264 ********* 2025-07-12 13:43:34.133349 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:43:34.133368 | orchestrator | 2025-07-12 13:43:34.133383 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-07-12 13:43:34.133394 | orchestrator | Saturday 12 July 2025 13:42:34 +0000 (0:00:02.869) 0:00:06.134 ********* 2025-07-12 13:43:34.133405 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:43:34.133415 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:43:34.133426 | orchestrator | ok: [testbed-manager] 2025-07-12 13:43:34.133437 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:43:34.133448 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:43:34.133465 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:43:34.133476 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:43:34.133486 | orchestrator | 2025-07-12 13:43:34.133497 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] 
************ 2025-07-12 13:43:34.133507 | orchestrator | Saturday 12 July 2025 13:42:38 +0000 (0:00:03.360) 0:00:09.495 ********* 2025-07-12 13:43:34.133518 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:43:34.133529 | orchestrator | ok: [testbed-manager] 2025-07-12 13:43:34.133539 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:43:34.133550 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:43:34.133560 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:43:34.133571 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:43:34.133581 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:43:34.133592 | orchestrator | 2025-07-12 13:43:34.133602 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-07-12 13:43:34.133613 | orchestrator | Saturday 12 July 2025 13:42:43 +0000 (0:00:05.253) 0:00:14.748 ********* 2025-07-12 13:43:34.133630 | orchestrator | changed: [testbed-manager] 2025-07-12 13:43:34.133641 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:43:34.133651 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:43:34.133662 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:43:34.133672 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:43:34.133683 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:43:34.133693 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:43:34.133704 | orchestrator | 2025-07-12 13:43:34.133714 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-07-12 13:43:34.133725 | orchestrator | Saturday 12 July 2025 13:42:45 +0000 (0:00:02.675) 0:00:17.424 ********* 2025-07-12 13:43:34.133735 | orchestrator | changed: [testbed-manager] 2025-07-12 13:43:34.133746 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:43:34.133756 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:43:34.133767 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:43:34.133777 | orchestrator | changed: [testbed-node-3] 
2025-07-12 13:43:34.133788 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:43:34.133798 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:43:34.133809 | orchestrator | 2025-07-12 13:43:34.133819 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-07-12 13:43:34.133830 | orchestrator | Saturday 12 July 2025 13:42:55 +0000 (0:00:10.020) 0:00:27.444 ********* 2025-07-12 13:43:34.133840 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:43:34.133851 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:43:34.133861 | orchestrator | changed: [testbed-manager] 2025-07-12 13:43:34.133872 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:43:34.133882 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:43:34.133892 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:43:34.133903 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:43:34.133913 | orchestrator | 2025-07-12 13:43:34.133924 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-07-12 13:43:34.133935 | orchestrator | Saturday 12 July 2025 13:43:12 +0000 (0:00:16.597) 0:00:44.042 ********* 2025-07-12 13:43:34.133946 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:43:34.133958 | orchestrator | 2025-07-12 13:43:34.133969 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-07-12 13:43:34.133979 | orchestrator | Saturday 12 July 2025 13:43:13 +0000 (0:00:01.337) 0:00:45.380 ********* 2025-07-12 13:43:34.133990 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-07-12 13:43:34.134001 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-07-12 13:43:34.134011 | orchestrator | changed: 
[testbed-node-2] => (item=netdata.conf) 2025-07-12 13:43:34.134079 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-07-12 13:43:34.134090 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-07-12 13:43:34.134101 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-07-12 13:43:34.134111 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-07-12 13:43:34.134122 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-07-12 13:43:34.134132 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-07-12 13:43:34.134143 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-07-12 13:43:34.134154 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-07-12 13:43:34.134164 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-07-12 13:43:34.134175 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-07-12 13:43:34.134185 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-07-12 13:43:34.134196 | orchestrator | 2025-07-12 13:43:34.134206 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-07-12 13:43:34.134250 | orchestrator | Saturday 12 July 2025 13:43:18 +0000 (0:00:04.250) 0:00:49.630 ********* 2025-07-12 13:43:34.134270 | orchestrator | ok: [testbed-manager] 2025-07-12 13:43:34.134287 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:43:34.134306 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:43:34.134326 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:43:34.134343 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:43:34.134361 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:43:34.134372 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:43:34.134383 | orchestrator | 2025-07-12 13:43:34.134393 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-07-12 
13:43:34.134404 | orchestrator | Saturday 12 July 2025 13:43:19 +0000 (0:00:01.361) 0:00:50.991 ********* 2025-07-12 13:43:34.134415 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:43:34.134425 | orchestrator | changed: [testbed-manager] 2025-07-12 13:43:34.134436 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:43:34.134446 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:43:34.134463 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:43:34.134473 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:43:34.134484 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:43:34.134494 | orchestrator | 2025-07-12 13:43:34.134505 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-07-12 13:43:34.134524 | orchestrator | Saturday 12 July 2025 13:43:21 +0000 (0:00:02.412) 0:00:53.403 ********* 2025-07-12 13:43:34.134535 | orchestrator | ok: [testbed-manager] 2025-07-12 13:43:34.134546 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:43:34.134556 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:43:34.134567 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:43:34.134577 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:43:34.134587 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:43:34.134598 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:43:34.134608 | orchestrator | 2025-07-12 13:43:34.134619 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-07-12 13:43:34.134630 | orchestrator | Saturday 12 July 2025 13:43:23 +0000 (0:00:01.657) 0:00:55.061 ********* 2025-07-12 13:43:34.134640 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:43:34.134651 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:43:34.134661 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:43:34.134672 | orchestrator | ok: [testbed-manager] 2025-07-12 13:43:34.134682 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:43:34.134693 | orchestrator | ok: 
[testbed-node-4] 2025-07-12 13:43:34.134703 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:43:34.134713 | orchestrator | 2025-07-12 13:43:34.134724 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-07-12 13:43:34.134734 | orchestrator | Saturday 12 July 2025 13:43:25 +0000 (0:00:01.885) 0:00:56.947 ********* 2025-07-12 13:43:34.134745 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-07-12 13:43:34.134757 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:43:34.134769 | orchestrator | 2025-07-12 13:43:34.134779 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-07-12 13:43:34.134790 | orchestrator | Saturday 12 July 2025 13:43:27 +0000 (0:00:01.587) 0:00:58.534 ********* 2025-07-12 13:43:34.134801 | orchestrator | changed: [testbed-manager] 2025-07-12 13:43:34.134811 | orchestrator | 2025-07-12 13:43:34.134822 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-07-12 13:43:34.134832 | orchestrator | Saturday 12 July 2025 13:43:29 +0000 (0:00:02.315) 0:01:00.850 ********* 2025-07-12 13:43:34.134843 | orchestrator | changed: [testbed-manager] 2025-07-12 13:43:34.134853 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:43:34.134864 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:43:34.134874 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:43:34.134893 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:43:34.134904 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:43:34.134914 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:43:34.134925 | orchestrator | 2025-07-12 13:43:34.134935 | 
orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:43:34.134946 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:43:34.134956 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:43:34.134967 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:43:34.134978 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:43:34.134988 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:43:34.134999 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:43:34.135009 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:43:34.135020 | orchestrator | 2025-07-12 13:43:34.135031 | orchestrator | 2025-07-12 13:43:34.135041 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:43:34.135058 | orchestrator | Saturday 12 July 2025 13:43:32 +0000 (0:00:03.454) 0:01:04.305 ********* 2025-07-12 13:43:34.135075 | orchestrator | =============================================================================== 2025-07-12 13:43:34.135093 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 16.60s 2025-07-12 13:43:34.135110 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.02s 2025-07-12 13:43:34.135128 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 5.25s 2025-07-12 13:43:34.135145 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.25s 2025-07-12 13:43:34.135164 | 
orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.45s 2025-07-12 13:43:34.135181 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 3.36s 2025-07-12 13:43:34.135200 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.87s 2025-07-12 13:43:34.135218 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.82s 2025-07-12 13:43:34.135317 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.68s 2025-07-12 13:43:34.135337 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.41s 2025-07-12 13:43:34.135355 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.32s 2025-07-12 13:43:34.135384 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.89s 2025-07-12 13:43:34.135402 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.66s 2025-07-12 13:43:34.135418 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.59s 2025-07-12 13:43:34.135434 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.36s 2025-07-12 13:43:34.135452 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.34s 2025-07-12 13:43:37.157013 | orchestrator | 2025-07-12 13:43:37 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:43:37.158364 | orchestrator | 2025-07-12 13:43:37 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:43:37.160254 | orchestrator | 2025-07-12 13:43:37 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:43:37.161785 | orchestrator | 2025-07-12 13:43:37 | INFO  | Task 
1e03776e-5fb0-4f28-bcea-e60dac14b396 is in state STARTED 2025-07-12 13:43:37.162351 | orchestrator | 2025-07-12 13:43:37 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:43:40.203447 | orchestrator | 2025-07-12 13:43:40 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:43:40.205499 | orchestrator | 2025-07-12 13:43:40 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:43:40.207943 | orchestrator | 2025-07-12 13:43:40 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:43:40.210437 | orchestrator | 2025-07-12 13:43:40 | INFO  | Task 1e03776e-5fb0-4f28-bcea-e60dac14b396 is in state STARTED 2025-07-12 13:43:40.211085 | orchestrator | 2025-07-12 13:43:40 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:43:43.263648 | orchestrator | 2025-07-12 13:43:43 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:43:43.267752 | orchestrator | 2025-07-12 13:43:43 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:43:43.272520 | orchestrator | 2025-07-12 13:43:43 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:43:43.273800 | orchestrator | 2025-07-12 13:43:43 | INFO  | Task 1e03776e-5fb0-4f28-bcea-e60dac14b396 is in state STARTED 2025-07-12 13:43:43.273832 | orchestrator | 2025-07-12 13:43:43 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:43:46.310119 | orchestrator | 2025-07-12 13:43:46 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:43:46.311603 | orchestrator | 2025-07-12 13:43:46 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:43:46.313870 | orchestrator | 2025-07-12 13:43:46 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:43:46.317709 | orchestrator | 2025-07-12 13:43:46 | INFO  | Task 
1e03776e-5fb0-4f28-bcea-e60dac14b396 is in state STARTED 2025-07-12 13:43:46.317735 | orchestrator | 2025-07-12 13:43:46 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:43:49.367287 | orchestrator | 2025-07-12 13:43:49 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:43:49.370814 | orchestrator | 2025-07-12 13:43:49 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:43:49.370869 | orchestrator | 2025-07-12 13:43:49 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:43:49.371460 | orchestrator | 2025-07-12 13:43:49 | INFO  | Task 1e03776e-5fb0-4f28-bcea-e60dac14b396 is in state STARTED 2025-07-12 13:43:49.371939 | orchestrator | 2025-07-12 13:43:49 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:43:52.425273 | orchestrator | 2025-07-12 13:43:52 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:43:52.425976 | orchestrator | 2025-07-12 13:43:52 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:43:52.430059 | orchestrator | 2025-07-12 13:43:52 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:43:52.431798 | orchestrator | 2025-07-12 13:43:52 | INFO  | Task 1e03776e-5fb0-4f28-bcea-e60dac14b396 is in state STARTED 2025-07-12 13:43:52.431820 | orchestrator | 2025-07-12 13:43:52 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:43:55.471558 | orchestrator | 2025-07-12 13:43:55 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:43:55.472884 | orchestrator | 2025-07-12 13:43:55 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:43:55.473480 | orchestrator | 2025-07-12 13:43:55 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:43:55.475798 | orchestrator | 2025-07-12 13:43:55 | INFO  | Task 
1e03776e-5fb0-4f28-bcea-e60dac14b396 is in state STARTED 2025-07-12 13:43:55.475825 | orchestrator | 2025-07-12 13:43:55 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:43:58.529965 | orchestrator | 2025-07-12 13:43:58 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:43:58.530798 | orchestrator | 2025-07-12 13:43:58 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:43:58.531232 | orchestrator | 2025-07-12 13:43:58 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:43:58.532207 | orchestrator | 2025-07-12 13:43:58 | INFO  | Task 1e03776e-5fb0-4f28-bcea-e60dac14b396 is in state STARTED 2025-07-12 13:43:58.532307 | orchestrator | 2025-07-12 13:43:58 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:44:01.586548 | orchestrator | 2025-07-12 13:44:01 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:44:01.592090 | orchestrator | 2025-07-12 13:44:01 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:44:01.598391 | orchestrator | 2025-07-12 13:44:01 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:44:01.598523 | orchestrator | 2025-07-12 13:44:01 | INFO  | Task 1e03776e-5fb0-4f28-bcea-e60dac14b396 is in state STARTED 2025-07-12 13:44:01.598543 | orchestrator | 2025-07-12 13:44:01 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:44:04.660101 | orchestrator | 2025-07-12 13:44:04 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:44:04.660398 | orchestrator | 2025-07-12 13:44:04 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:44:04.666197 | orchestrator | 2025-07-12 13:44:04 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:44:04.666277 | orchestrator | 2025-07-12 13:44:04 | INFO  | Task 
1e03776e-5fb0-4f28-bcea-e60dac14b396 is in state STARTED 2025-07-12 13:44:04.666293 | orchestrator | 2025-07-12 13:44:04 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:44:07.707492 | orchestrator | 2025-07-12 13:44:07 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:44:07.710564 | orchestrator | 2025-07-12 13:44:07 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:44:07.711702 | orchestrator | 2025-07-12 13:44:07 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:44:07.712724 | orchestrator | 2025-07-12 13:44:07 | INFO  | Task 1e03776e-5fb0-4f28-bcea-e60dac14b396 is in state SUCCESS 2025-07-12 13:44:07.713949 | orchestrator | 2025-07-12 13:44:07 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:44:10.760439 | orchestrator | 2025-07-12 13:44:10 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:44:10.762835 | orchestrator | 2025-07-12 13:44:10 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:44:10.764889 | orchestrator | 2025-07-12 13:44:10 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:44:10.764930 | orchestrator | 2025-07-12 13:44:10 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:44:13.819555 | orchestrator | 2025-07-12 13:44:13 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:44:13.820787 | orchestrator | 2025-07-12 13:44:13 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:44:13.822826 | orchestrator | 2025-07-12 13:44:13 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:44:13.822850 | orchestrator | 2025-07-12 13:44:13 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:44:16.866483 | orchestrator | 2025-07-12 13:44:16 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state 
STARTED 2025-07-12 13:44:16.868340 | orchestrator | 2025-07-12 13:44:16 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:44:16.869991 | orchestrator | 2025-07-12 13:44:16 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:44:16.870091 | orchestrator | 2025-07-12 13:44:16 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:44:19.930230 | orchestrator | 2025-07-12 13:44:19 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:44:19.931324 | orchestrator | 2025-07-12 13:44:19 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:44:19.932734 | orchestrator | 2025-07-12 13:44:19 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:44:19.933226 | orchestrator | 2025-07-12 13:44:19 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:44:22.974395 | orchestrator | 2025-07-12 13:44:22 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:44:22.975919 | orchestrator | 2025-07-12 13:44:22 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:44:22.979410 | orchestrator | 2025-07-12 13:44:22 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:44:22.979440 | orchestrator | 2025-07-12 13:44:22 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:44:26.056394 | orchestrator | 2025-07-12 13:44:26 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:44:26.059634 | orchestrator | 2025-07-12 13:44:26 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:44:26.063741 | orchestrator | 2025-07-12 13:44:26 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:44:26.064756 | orchestrator | 2025-07-12 13:44:26 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:44:29.115550 | orchestrator | 
2025-07-12 13:44:29 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:44:29.118790 | orchestrator | 2025-07-12 13:44:29 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:44:29.122625 | orchestrator | 2025-07-12 13:44:29 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:44:29.122906 | orchestrator | 2025-07-12 13:44:29 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:44:32.167393 | orchestrator | 2025-07-12 13:44:32 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:44:32.167506 | orchestrator | 2025-07-12 13:44:32 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:44:32.171074 | orchestrator | 2025-07-12 13:44:32 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:44:32.171194 | orchestrator | 2025-07-12 13:44:32 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:44:35.221949 | orchestrator | 2025-07-12 13:44:35 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:44:35.223239 | orchestrator | 2025-07-12 13:44:35 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:44:35.225068 | orchestrator | 2025-07-12 13:44:35 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:44:35.225093 | orchestrator | 2025-07-12 13:44:35 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:44:38.282287 | orchestrator | 2025-07-12 13:44:38 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:44:38.284278 | orchestrator | 2025-07-12 13:44:38 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:44:38.285855 | orchestrator | 2025-07-12 13:44:38 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:44:38.286272 | orchestrator | 2025-07-12 13:44:38 | INFO  | 
Wait 1 second(s) until the next check 2025-07-12 13:44:41.342795 | orchestrator | 2025-07-12 13:44:41 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:44:41.344385 | orchestrator | 2025-07-12 13:44:41 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:44:41.344430 | orchestrator | 2025-07-12 13:44:41 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:44:41.344445 | orchestrator | 2025-07-12 13:44:41 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:44:44.386689 | orchestrator | 2025-07-12 13:44:44 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:44:44.388768 | orchestrator | 2025-07-12 13:44:44 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:44:44.391362 | orchestrator | 2025-07-12 13:44:44 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:44:44.391397 | orchestrator | 2025-07-12 13:44:44 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:44:47.446948 | orchestrator | 2025-07-12 13:44:47 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:44:47.451619 | orchestrator | 2025-07-12 13:44:47 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:44:47.458974 | orchestrator | 2025-07-12 13:44:47 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:44:47.459010 | orchestrator | 2025-07-12 13:44:47 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:44:50.506797 | orchestrator | 2025-07-12 13:44:50 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:44:50.509624 | orchestrator | 2025-07-12 13:44:50 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:44:50.512882 | orchestrator | 2025-07-12 13:44:50 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state 
STARTED 2025-07-12 13:44:50.513163 | orchestrator | 2025-07-12 13:44:50 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:44:53.556371 | orchestrator | 2025-07-12 13:44:53 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:44:53.556869 | orchestrator | 2025-07-12 13:44:53 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:44:53.557792 | orchestrator | 2025-07-12 13:44:53 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:44:53.557934 | orchestrator | 2025-07-12 13:44:53 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:44:56.598292 | orchestrator | 2025-07-12 13:44:56 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:44:56.599538 | orchestrator | 2025-07-12 13:44:56 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:44:56.601619 | orchestrator | 2025-07-12 13:44:56 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:44:56.601651 | orchestrator | 2025-07-12 13:44:56 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:44:59.643757 | orchestrator | 2025-07-12 13:44:59 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:44:59.644359 | orchestrator | 2025-07-12 13:44:59 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:44:59.645319 | orchestrator | 2025-07-12 13:44:59 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:44:59.645364 | orchestrator | 2025-07-12 13:44:59 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:45:02.677886 | orchestrator | 2025-07-12 13:45:02 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:45:02.680517 | orchestrator | 2025-07-12 13:45:02 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:45:02.683555 | orchestrator | 
2025-07-12 13:45:02 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:45:02.683597 | orchestrator | 2025-07-12 13:45:02 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:45:05.723623 | orchestrator | 2025-07-12 13:45:05 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:45:05.723858 | orchestrator | 2025-07-12 13:45:05 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:45:05.725640 | orchestrator | 2025-07-12 13:45:05 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:45:05.725666 | orchestrator | 2025-07-12 13:45:05 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:45:08.783356 | orchestrator | 2025-07-12 13:45:08 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:45:08.785576 | orchestrator | 2025-07-12 13:45:08 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:45:08.786489 | orchestrator | 2025-07-12 13:45:08 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:45:08.786522 | orchestrator | 2025-07-12 13:45:08 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:45:11.828838 | orchestrator | 2025-07-12 13:45:11 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:45:12.011178 | orchestrator | 2025-07-12 13:45:11 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:45:12.011274 | orchestrator | 2025-07-12 13:45:11 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:45:12.011288 | orchestrator | 2025-07-12 13:45:11 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:45:14.872089 | orchestrator | 2025-07-12 13:45:14 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:45:14.874832 | orchestrator | 2025-07-12 13:45:14 | INFO  | Task 
6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:45:14.877308 | orchestrator | 2025-07-12 13:45:14 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:45:14.877517 | orchestrator | 2025-07-12 13:45:14 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:45:17.922871 | orchestrator | 2025-07-12 13:45:17 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:45:17.925140 | orchestrator | 2025-07-12 13:45:17 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:45:17.927755 | orchestrator | 2025-07-12 13:45:17 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:45:17.927812 | orchestrator | 2025-07-12 13:45:17 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:45:20.984099 | orchestrator | 2025-07-12 13:45:20 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:45:20.986717 | orchestrator | 2025-07-12 13:45:20 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:45:20.989216 | orchestrator | 2025-07-12 13:45:20 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state STARTED 2025-07-12 13:45:20.989243 | orchestrator | 2025-07-12 13:45:20 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:45:24.039071 | orchestrator | 2025-07-12 13:45:24 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:45:24.039182 | orchestrator | 2025-07-12 13:45:24 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:45:24.041626 | orchestrator | 2025-07-12 13:45:24 | INFO  | Task 68c1eeba-a9f0-466d-8373-515bb76e342f is in state STARTED 2025-07-12 13:45:24.046001 | orchestrator | 2025-07-12 13:45:24 | INFO  | Task 4c96748f-0d19-4eda-b32d-a8d903c9ef5f is in state SUCCESS 2025-07-12 13:45:24.048875 | orchestrator | 2025-07-12 13:45:24.050124 | orchestrator | 2025-07-12 
13:45:24.050144 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-07-12 13:45:24.050157 | orchestrator | 2025-07-12 13:45:24.050169 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-07-12 13:45:24.050181 | orchestrator | Saturday 12 July 2025 13:42:50 +0000 (0:00:00.296) 0:00:00.296 ********* 2025-07-12 13:45:24.050192 | orchestrator | ok: [testbed-manager] 2025-07-12 13:45:24.050204 | orchestrator | 2025-07-12 13:45:24.050215 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-07-12 13:45:24.050226 | orchestrator | Saturday 12 July 2025 13:42:51 +0000 (0:00:00.985) 0:00:01.281 ********* 2025-07-12 13:45:24.050237 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-07-12 13:45:24.050248 | orchestrator | 2025-07-12 13:45:24.050259 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-07-12 13:45:24.050270 | orchestrator | Saturday 12 July 2025 13:42:52 +0000 (0:00:00.672) 0:00:01.954 ********* 2025-07-12 13:45:24.050281 | orchestrator | changed: [testbed-manager] 2025-07-12 13:45:24.050292 | orchestrator | 2025-07-12 13:45:24.050303 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-07-12 13:45:24.050314 | orchestrator | Saturday 12 July 2025 13:42:53 +0000 (0:00:01.252) 0:00:03.207 ********* 2025-07-12 13:45:24.050324 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
2025-07-12 13:45:24.050335 | orchestrator | ok: [testbed-manager] 2025-07-12 13:45:24.050374 | orchestrator | 2025-07-12 13:45:24.050386 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-07-12 13:45:24.050397 | orchestrator | Saturday 12 July 2025 13:44:02 +0000 (0:01:09.363) 0:01:12.570 ********* 2025-07-12 13:45:24.050408 | orchestrator | changed: [testbed-manager] 2025-07-12 13:45:24.050419 | orchestrator | 2025-07-12 13:45:24.050429 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:45:24.050440 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:45:24.050453 | orchestrator | 2025-07-12 13:45:24.050464 | orchestrator | 2025-07-12 13:45:24.050475 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:45:24.050486 | orchestrator | Saturday 12 July 2025 13:44:07 +0000 (0:00:04.142) 0:01:16.713 ********* 2025-07-12 13:45:24.050496 | orchestrator | =============================================================================== 2025-07-12 13:45:24.050507 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 69.36s 2025-07-12 13:45:24.050537 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.14s 2025-07-12 13:45:24.050548 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.25s 2025-07-12 13:45:24.050559 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.99s 2025-07-12 13:45:24.050571 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.67s 2025-07-12 13:45:24.050583 | orchestrator | 2025-07-12 13:45:24.050596 | orchestrator | 2025-07-12 13:45:24.050620 | orchestrator | PLAY [Apply role common] 
*******************************************************
2025-07-12 13:45:24.050632 | orchestrator |
2025-07-12 13:45:24.050645 | orchestrator | TASK [common : include_tasks] **************************************************
2025-07-12 13:45:24.050657 | orchestrator | Saturday 12 July 2025 13:42:20 +0000 (0:00:00.333) 0:00:00.333 *********
2025-07-12 13:45:24.050670 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:45:24.050683 | orchestrator |
2025-07-12 13:45:24.050696 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-07-12 13:45:24.050708 | orchestrator | Saturday 12 July 2025 13:42:21 +0000 (0:00:01.374) 0:00:01.708 *********
2025-07-12 13:45:24.050721 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-07-12 13:45:24.050733 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-07-12 13:45:24.050746 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-07-12 13:45:24.050758 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-07-12 13:45:24.050770 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-07-12 13:45:24.050783 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-07-12 13:45:24.050795 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-07-12 13:45:24.050808 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-07-12 13:45:24.050821 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-07-12 13:45:24.050834 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-07-12 13:45:24.050846 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-07-12 13:45:24.050859 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-07-12 13:45:24.050871 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-07-12 13:45:24.050884 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-07-12 13:45:24.050896 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-07-12 13:45:24.050908 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-07-12 13:45:24.050968 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-07-12 13:45:24.050982 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-07-12 13:45:24.050993 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-07-12 13:45:24.051003 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-07-12 13:45:24.051013 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-07-12 13:45:24.051079 | orchestrator |
2025-07-12 13:45:24.051093 | orchestrator | TASK [common : include_tasks] **************************************************
2025-07-12 13:45:24.051103 | orchestrator | Saturday 12 July 2025 13:42:26 +0000 (0:00:04.605) 0:00:06.313 *********
2025-07-12 13:45:24.051123 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:45:24.051135 | orchestrator |
2025-07-12
13:45:24.051146 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-07-12 13:45:24.051157 | orchestrator | Saturday 12 July 2025 13:42:27 +0000 (0:00:01.281) 0:00:07.595 ********* 2025-07-12 13:45:24.051172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 13:45:24.051187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 13:45:24.051205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 13:45:24.051216 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 13:45:24.051227 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 13:45:24.051367 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:24.051395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:24.051428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:24.051447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:24.051474 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 13:45:24.051493 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 13:45:24.051518 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:24.051591 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:24.051606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:24.051626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:24.051638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:24.051649 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:24.051661 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:24.051679 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:24.051691 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:24.051702 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:24.051713 | orchestrator | 2025-07-12 13:45:24.051733 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-07-12 13:45:24.051774 | orchestrator | Saturday 12 July 2025 13:42:32 +0000 (0:00:05.124) 0:00:12.719 ********* 2025-07-12 13:45:24.051788 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 13:45:24.051799 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:45:24.051811 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:45:24.051822 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:45:24.051838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 13:45:24.051850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:45:24.051861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-07-12 13:45:24.051872 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:45:24.051883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 13:45:24.051942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:45:24.051956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:45:24.051967 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:45:24.051978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 13:45:24.051989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:45:24.052007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:45:24.052020 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 13:45:24.052195 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:45:24.052220 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:45:24.052262 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:45:24.052276 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:45:24.052290 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 
13:45:24.052301 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:45:24.052313 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:45:24.052324 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:45:24.052341 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 13:45:24.052352 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:45:24.052362 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:45:24.052378 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:45:24.052387 | orchestrator | 2025-07-12 13:45:24.052397 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-07-12 13:45:24.052406 | orchestrator | Saturday 12 July 2025 13:42:34 +0000 (0:00:01.766) 0:00:14.486 ********* 2025-07-12 13:45:24.052416 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 13:45:24.052441 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:45:24.052459 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:45:24.052477 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:45:24.052494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 13:45:24.052517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.052534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.052551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:24.052588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.052650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.052669 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:45:24.052679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:24.052689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.052700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.052709 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:45:24.052725 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:24.052736 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.052753 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.052763 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:45:24.052773 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:45:24.052783 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:24.052800 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.052810 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.052820 | orchestrator | skipping:
[testbed-node-4]
2025-07-12 13:45:24.052830 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:24.052844 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.052854 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.052870 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:45:24.052880 | orchestrator |
2025-07-12 13:45:24.052890 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-07-12 13:45:24.052900 | orchestrator | Saturday 12 July 2025 13:42:37 +0000 (0:00:03.082) 0:00:17.568 *********
2025-07-12 13:45:24.052909 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:45:24.052919 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:45:24.052928 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:45:24.052937 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:45:24.052947 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:45:24.052956 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:45:24.052965 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:45:24.052975 | orchestrator |
2025-07-12 13:45:24.052984 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-07-12 13:45:24.052994 | orchestrator | Saturday 12 July 2025 13:42:38 +0000 (0:00:01.295) 0:00:18.864 *********
2025-07-12 13:45:24.053004 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:45:24.053013 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:45:24.053023 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:45:24.053054 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:45:24.053063 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:45:24.053152 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:45:24.053164 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:45:24.053173 | orchestrator |
2025-07-12 13:45:24.053183 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-07-12 13:45:24.053193 | orchestrator | Saturday 12 July 2025 13:42:40 +0000 (0:00:02.030) 0:00:20.895 *********
2025-07-12 13:45:24.053209 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes':
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:24.053220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:24.053230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:24.053240 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:24.053258 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.053268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:24.053278 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:24.053288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.053305 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:24.053315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.053331 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.053350 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.053361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.053371 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.053381 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.053397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.053407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.053417 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment':
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.053433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.053446 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.053457 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.053466 | orchestrator |
2025-07-12 13:45:24.053476 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2025-07-12 13:45:24.053486 | orchestrator | Saturday 12 July 2025 13:42:46 +0000 (0:00:05.426) 0:00:26.322 *********
2025-07-12 13:45:24.053496 | orchestrator | [WARNING]: Skipped
2025-07-12 13:45:24.053506 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2025-07-12 13:45:24.053515 | orchestrator | to this access issue:
2025-07-12 13:45:24.053525 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2025-07-12 13:45:24.053534 | orchestrator | directory
2025-07-12 13:45:24.053544 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-12 13:45:24.053553 | orchestrator |
2025-07-12 13:45:24.053563 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-07-12 13:45:24.053573 | orchestrator | Saturday 12 July 2025 13:42:48 +0000 (0:00:01.881) 0:00:28.203 *********
2025-07-12 13:45:24.053582 | orchestrator | [WARNING]: Skipped
2025-07-12 13:45:24.053594 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2025-07-12 13:45:24.053611 | orchestrator | to this access issue:
2025-07-12 13:45:24.053627 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2025-07-12 13:45:24.053645 | orchestrator | directory
2025-07-12 13:45:24.053663 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-12 13:45:24.053680 | orchestrator |
2025-07-12 13:45:24.053691 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2025-07-12 13:45:24.053701 | orchestrator | Saturday 12 July 2025 13:42:49 +0000 (0:00:01.206) 0:00:29.409 *********
2025-07-12 13:45:24.053710 | orchestrator | [WARNING]: Skipped
2025-07-12 13:45:24.053720 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2025-07-12 13:45:24.053729 | orchestrator | to this access issue:
2025-07-12 13:45:24.053739 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2025-07-12 13:45:24.053748 | orchestrator | directory
2025-07-12 13:45:24.053758 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-12 13:45:24.053767 | orchestrator |
2025-07-12 13:45:24.053783 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2025-07-12 13:45:24.053794 | orchestrator | Saturday 12 July 2025 13:42:50 +0000 (0:00:01.040) 0:00:30.449 *********
2025-07-12 13:45:24.053812 | orchestrator | [WARNING]: Skipped
2025-07-12 13:45:24.053824 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2025-07-12 13:45:24.053834 | orchestrator | to this access issue:
2025-07-12 13:45:24.053846 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2025-07-12 13:45:24.053856 | orchestrator | directory
2025-07-12 13:45:24.053867 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-12 13:45:24.053878 | orchestrator |
2025-07-12 13:45:24.053889 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2025-07-12 13:45:24.053900 | orchestrator | Saturday 12 July 2025 13:42:51 +0000 (0:00:00.999) 0:00:31.449 *********
2025-07-12 13:45:24.053911 | orchestrator | changed: [testbed-manager]
2025-07-12 13:45:24.053921 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:45:24.053932 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:45:24.053943 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:45:24.053954 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:45:24.053964 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:45:24.053975 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:45:24.053986 | orchestrator |
2025-07-12 13:45:24.053997 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2025-07-12 13:45:24.054007 | orchestrator | Saturday 12 July 2025 13:42:55 +0000 (0:00:04.289) 0:00:35.739 *********
2025-07-12 13:45:24.054063 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-07-12 13:45:24.054077 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-07-12 13:45:24.054088 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-07-12 13:45:24.054099 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-07-12 13:45:24.054110 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-07-12 13:45:24.054121 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-07-12 13:45:24.054133 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-07-12 13:45:24.054144 | orchestrator |
2025-07-12 13:45:24.054153 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2025-07-12 13:45:24.054163 | orchestrator | Saturday 12 July 2025 13:42:59 +0000 (0:00:03.394) 0:00:39.133 *********
2025-07-12 13:45:24.054172 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:45:24.054182 | orchestrator | changed: [testbed-manager]
2025-07-12 13:45:24.054191 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:45:24.054201 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:45:24.054210 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:45:24.054224 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:45:24.054234 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:45:24.054243 | orchestrator |
2025-07-12 13:45:24.054253 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2025-07-12 13:45:24.054262 | orchestrator | Saturday 12 July 2025 13:43:03 +0000 (0:00:04.119) 0:00:43.252 *********
2025-07-12
13:45:24.054272 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:24.054283 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.054299 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:24.054316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.054327 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:24.054337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.054351 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.054363 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.054373 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.054392 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:24.054408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.054418 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:24.054428 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.054438 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.054452 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:24.054462 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.054478 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:24.054488 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.054508 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.054518 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.054528 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:24.054538 | orchestrator |
2025-07-12 13:45:24.054548 | orchestrator | TASK [common :
Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-07-12 13:45:24.054557 | orchestrator | Saturday 12 July 2025 13:43:05 +0000 (0:00:02.469) 0:00:45.722 ********* 2025-07-12 13:45:24.054567 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-07-12 13:45:24.054576 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-07-12 13:45:24.054586 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-07-12 13:45:24.054596 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-07-12 13:45:24.054605 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-07-12 13:45:24.054614 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-07-12 13:45:24.054624 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-07-12 13:45:24.054633 | orchestrator | 2025-07-12 13:45:24.054647 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-07-12 13:45:24.054662 | orchestrator | Saturday 12 July 2025 13:43:08 +0000 (0:00:02.308) 0:00:48.030 ********* 2025-07-12 13:45:24.054672 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-12 13:45:24.054681 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-12 13:45:24.054691 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-12 13:45:24.054700 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-12 13:45:24.054709 | orchestrator | changed: [testbed-node-3] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-12 13:45:24.054719 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-12 13:45:24.054728 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-12 13:45:24.054738 | orchestrator | 2025-07-12 13:45:24.054747 | orchestrator | TASK [common : Check common containers] **************************************** 2025-07-12 13:45:24.054757 | orchestrator | Saturday 12 July 2025 13:43:11 +0000 (0:00:02.894) 0:00:50.925 ********* 2025-07-12 13:45:24.054767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 13:45:24.054777 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 13:45:24.054792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 13:45:24.054803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 13:45:24.054813 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 13:45:24.054832 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:24.054842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:24.054852 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 13:45:24.054862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-07-12 13:45:24.054877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:24.054888 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 13:45:24.054898 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:24.054917 | orchestrator | changed: [testbed-manager] => (item={'key': 
'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:24.054927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:24.054938 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:24.054948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 
13:45:24.054958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:24.054974 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:24.054985 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:24.054995 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:24.055010 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:24.055019 | orchestrator | 2025-07-12 13:45:24.055081 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-07-12 13:45:24.055092 | orchestrator | Saturday 12 July 2025 13:43:14 +0000 (0:00:03.128) 0:00:54.053 ********* 2025-07-12 13:45:24.055102 | orchestrator | changed: [testbed-manager] 2025-07-12 13:45:24.055111 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:45:24.055121 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:45:24.055131 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:45:24.055140 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:45:24.055150 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:45:24.055159 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:45:24.055169 | orchestrator | 2025-07-12 13:45:24.055178 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-07-12 13:45:24.055188 | orchestrator | Saturday 12 July 2025 13:43:15 +0000 (0:00:01.725) 0:00:55.779 ********* 2025-07-12 13:45:24.055197 | orchestrator | changed: [testbed-manager] 2025-07-12 13:45:24.055207 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:45:24.055216 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:45:24.055226 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:45:24.055235 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:45:24.055245 | 
orchestrator | changed: [testbed-node-4] 2025-07-12 13:45:24.055254 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:45:24.055263 | orchestrator | 2025-07-12 13:45:24.055273 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-07-12 13:45:24.055283 | orchestrator | Saturday 12 July 2025 13:43:17 +0000 (0:00:01.334) 0:00:57.114 ********* 2025-07-12 13:45:24.055292 | orchestrator | 2025-07-12 13:45:24.055302 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-07-12 13:45:24.055311 | orchestrator | Saturday 12 July 2025 13:43:17 +0000 (0:00:00.214) 0:00:57.328 ********* 2025-07-12 13:45:24.055320 | orchestrator | 2025-07-12 13:45:24.055330 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-07-12 13:45:24.055340 | orchestrator | Saturday 12 July 2025 13:43:17 +0000 (0:00:00.101) 0:00:57.430 ********* 2025-07-12 13:45:24.055349 | orchestrator | 2025-07-12 13:45:24.055359 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-07-12 13:45:24.055368 | orchestrator | Saturday 12 July 2025 13:43:17 +0000 (0:00:00.095) 0:00:57.525 ********* 2025-07-12 13:45:24.055378 | orchestrator | 2025-07-12 13:45:24.055387 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-07-12 13:45:24.055397 | orchestrator | Saturday 12 July 2025 13:43:17 +0000 (0:00:00.064) 0:00:57.590 ********* 2025-07-12 13:45:24.055406 | orchestrator | 2025-07-12 13:45:24.055416 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-07-12 13:45:24.055425 | orchestrator | Saturday 12 July 2025 13:43:17 +0000 (0:00:00.062) 0:00:57.653 ********* 2025-07-12 13:45:24.055435 | orchestrator | 2025-07-12 13:45:24.055450 | orchestrator | TASK [common : Flush handlers] ************************************************* 
2025-07-12 13:45:24.055459 | orchestrator | Saturday 12 July 2025 13:43:17 +0000 (0:00:00.059) 0:00:57.712 ********* 2025-07-12 13:45:24.055469 | orchestrator | 2025-07-12 13:45:24.055478 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-07-12 13:45:24.055494 | orchestrator | Saturday 12 July 2025 13:43:17 +0000 (0:00:00.092) 0:00:57.805 ********* 2025-07-12 13:45:24.055509 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:45:24.055519 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:45:24.055528 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:45:24.055538 | orchestrator | changed: [testbed-manager] 2025-07-12 13:45:24.055548 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:45:24.055557 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:45:24.055567 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:45:24.055576 | orchestrator | 2025-07-12 13:45:24.055586 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-07-12 13:45:24.055595 | orchestrator | Saturday 12 July 2025 13:44:00 +0000 (0:00:42.758) 0:01:40.564 ********* 2025-07-12 13:45:24.055605 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:45:24.055614 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:45:24.055624 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:45:24.055631 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:45:24.055639 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:45:24.055647 | orchestrator | changed: [testbed-manager] 2025-07-12 13:45:24.055654 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:45:24.055662 | orchestrator | 2025-07-12 13:45:24.055670 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-07-12 13:45:24.055678 | orchestrator | Saturday 12 July 2025 13:45:09 +0000 (0:01:08.928) 0:02:49.492 ********* 2025-07-12 13:45:24.055686 | orchestrator 
| ok: [testbed-node-0] 2025-07-12 13:45:24.055693 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:45:24.055701 | orchestrator | ok: [testbed-manager] 2025-07-12 13:45:24.055709 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:45:24.055716 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:45:24.055724 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:45:24.055732 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:45:24.055740 | orchestrator | 2025-07-12 13:45:24.055748 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-07-12 13:45:24.055756 | orchestrator | Saturday 12 July 2025 13:45:12 +0000 (0:00:02.532) 0:02:52.025 ********* 2025-07-12 13:45:24.055763 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:45:24.055771 | orchestrator | changed: [testbed-manager] 2025-07-12 13:45:24.055779 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:45:24.055787 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:45:24.055794 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:45:24.055802 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:45:24.055809 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:45:24.055817 | orchestrator | 2025-07-12 13:45:24.055825 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:45:24.055834 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-12 13:45:24.055842 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-12 13:45:24.055850 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-12 13:45:24.055862 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-12 13:45:24.055870 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 
skipped=4  rescued=0 ignored=0 2025-07-12 13:45:24.055878 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-12 13:45:24.055885 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-12 13:45:24.055899 | orchestrator | 2025-07-12 13:45:24.055907 | orchestrator | 2025-07-12 13:45:24.055915 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:45:24.055923 | orchestrator | Saturday 12 July 2025 13:45:22 +0000 (0:00:10.220) 0:03:02.245 ********* 2025-07-12 13:45:24.055931 | orchestrator | =============================================================================== 2025-07-12 13:45:24.055938 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 68.93s 2025-07-12 13:45:24.055946 | orchestrator | common : Restart fluentd container ------------------------------------- 42.76s 2025-07-12 13:45:24.055954 | orchestrator | common : Restart cron container ---------------------------------------- 10.22s 2025-07-12 13:45:24.055962 | orchestrator | common : Copying over config.json files for services -------------------- 5.43s 2025-07-12 13:45:24.055970 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.12s 2025-07-12 13:45:24.055977 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.61s 2025-07-12 13:45:24.055985 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.29s 2025-07-12 13:45:24.055993 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 4.12s 2025-07-12 13:45:24.056000 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.39s 2025-07-12 13:45:24.056008 | orchestrator | common : Check common containers ---------------------------------------- 3.13s 
2025-07-12 13:45:24.056016 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.08s 2025-07-12 13:45:24.056039 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.89s 2025-07-12 13:45:24.056048 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.53s 2025-07-12 13:45:24.056055 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.47s 2025-07-12 13:45:24.056067 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.31s 2025-07-12 13:45:24.056075 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 2.03s 2025-07-12 13:45:24.056083 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.88s 2025-07-12 13:45:24.056090 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.77s 2025-07-12 13:45:24.056098 | orchestrator | common : Creating log volume -------------------------------------------- 1.73s 2025-07-12 13:45:24.056106 | orchestrator | common : include_tasks -------------------------------------------------- 1.37s 2025-07-12 13:45:24.056114 | orchestrator | 2025-07-12 13:45:24 | INFO  | Task 2f5a327b-aeff-47e0-a30c-3a088fb4e042 is in state STARTED 2025-07-12 13:45:24.056122 | orchestrator | 2025-07-12 13:45:24 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:45:24.056130 | orchestrator | 2025-07-12 13:45:24 | INFO  | Task 1befab49-0e2b-451c-8a53-c5b6394c5856 is in state STARTED 2025-07-12 13:45:24.056138 | orchestrator | 2025-07-12 13:45:24 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:45:27.114857 | orchestrator | 2025-07-12 13:45:27 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:45:27.114954 | orchestrator | 2025-07-12 13:45:27 | INFO  | Task 
6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:45:27.114968 | orchestrator | 2025-07-12 13:45:27 | INFO  | Task 68c1eeba-a9f0-466d-8373-515bb76e342f is in state STARTED 2025-07-12 13:45:27.114980 | orchestrator | 2025-07-12 13:45:27 | INFO  | Task 2f5a327b-aeff-47e0-a30c-3a088fb4e042 is in state STARTED 2025-07-12 13:45:27.114991 | orchestrator | 2025-07-12 13:45:27 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:45:27.115002 | orchestrator | 2025-07-12 13:45:27 | INFO  | Task 1befab49-0e2b-451c-8a53-c5b6394c5856 is in state STARTED 2025-07-12 13:45:27.115083 | orchestrator | 2025-07-12 13:45:27 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:45:30.154596 | orchestrator | 2025-07-12 13:45:30 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:45:30.155041 | orchestrator | 2025-07-12 13:45:30 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:45:30.155075 | orchestrator | 2025-07-12 13:45:30 | INFO  | Task 68c1eeba-a9f0-466d-8373-515bb76e342f is in state STARTED 2025-07-12 13:45:30.155782 | orchestrator | 2025-07-12 13:45:30 | INFO  | Task 2f5a327b-aeff-47e0-a30c-3a088fb4e042 is in state STARTED 2025-07-12 13:45:30.157762 | orchestrator | 2025-07-12 13:45:30 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:45:30.157785 | orchestrator | 2025-07-12 13:45:30 | INFO  | Task 1befab49-0e2b-451c-8a53-c5b6394c5856 is in state STARTED 2025-07-12 13:45:30.157797 | orchestrator | 2025-07-12 13:45:30 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:45:33.208911 | orchestrator | 2025-07-12 13:45:33 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:45:33.209173 | orchestrator | 2025-07-12 13:45:33 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:45:33.209268 | orchestrator | 2025-07-12 13:45:33 | INFO  | Task 
68c1eeba-a9f0-466d-8373-515bb76e342f is in state STARTED 2025-07-12 13:45:33.209286 | orchestrator | 2025-07-12 13:45:33 | INFO  | Task 2f5a327b-aeff-47e0-a30c-3a088fb4e042 is in state STARTED 2025-07-12 13:45:33.209298 | orchestrator | 2025-07-12 13:45:33 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:45:33.209323 | orchestrator | 2025-07-12 13:45:33 | INFO  | Task 1befab49-0e2b-451c-8a53-c5b6394c5856 is in state STARTED 2025-07-12 13:45:33.209335 | orchestrator | 2025-07-12 13:45:33 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:45:36.291978 | orchestrator | 2025-07-12 13:45:36 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:45:36.292133 | orchestrator | 2025-07-12 13:45:36 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:45:36.292149 | orchestrator | 2025-07-12 13:45:36 | INFO  | Task 68c1eeba-a9f0-466d-8373-515bb76e342f is in state STARTED 2025-07-12 13:45:36.292161 | orchestrator | 2025-07-12 13:45:36 | INFO  | Task 2f5a327b-aeff-47e0-a30c-3a088fb4e042 is in state STARTED 2025-07-12 13:45:36.292972 | orchestrator | 2025-07-12 13:45:36 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:45:36.294068 | orchestrator | 2025-07-12 13:45:36 | INFO  | Task 1befab49-0e2b-451c-8a53-c5b6394c5856 is in state STARTED 2025-07-12 13:45:36.294092 | orchestrator | 2025-07-12 13:45:36 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:45:39.337219 | orchestrator | 2025-07-12 13:45:39 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:45:39.337806 | orchestrator | 2025-07-12 13:45:39 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:45:39.339464 | orchestrator | 2025-07-12 13:45:39 | INFO  | Task 68c1eeba-a9f0-466d-8373-515bb76e342f is in state STARTED 2025-07-12 13:45:39.341420 | orchestrator | 2025-07-12 13:45:39 | INFO  | Task 
2f5a327b-aeff-47e0-a30c-3a088fb4e042 is in state STARTED 2025-07-12 13:45:39.343082 | orchestrator | 2025-07-12 13:45:39 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:45:39.344221 | orchestrator | 2025-07-12 13:45:39 | INFO  | Task 1befab49-0e2b-451c-8a53-c5b6394c5856 is in state STARTED 2025-07-12 13:45:39.344456 | orchestrator | 2025-07-12 13:45:39 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:45:42.380905 | orchestrator | 2025-07-12 13:45:42 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:45:42.381674 | orchestrator | 2025-07-12 13:45:42 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:45:42.382533 | orchestrator | 2025-07-12 13:45:42 | INFO  | Task 68c1eeba-a9f0-466d-8373-515bb76e342f is in state SUCCESS 2025-07-12 13:45:42.383411 | orchestrator | 2025-07-12 13:45:42 | INFO  | Task 2f5a327b-aeff-47e0-a30c-3a088fb4e042 is in state STARTED 2025-07-12 13:45:42.384206 | orchestrator | 2025-07-12 13:45:42 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:45:42.385160 | orchestrator | 2025-07-12 13:45:42 | INFO  | Task 1befab49-0e2b-451c-8a53-c5b6394c5856 is in state STARTED 2025-07-12 13:45:42.385184 | orchestrator | 2025-07-12 13:45:42 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:45:45.438572 | orchestrator | 2025-07-12 13:45:45 | INFO  | Task b4df9d47-288e-4475-a750-9dffc0e5dcee is in state STARTED 2025-07-12 13:45:45.440135 | orchestrator | 2025-07-12 13:45:45 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:45:45.441606 | orchestrator | 2025-07-12 13:45:45 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:45:45.443128 | orchestrator | 2025-07-12 13:45:45 | INFO  | Task 2f5a327b-aeff-47e0-a30c-3a088fb4e042 is in state STARTED 2025-07-12 13:45:45.444459 | orchestrator | 2025-07-12 13:45:45 | INFO  | Task 
2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:45:45.444490 | orchestrator | 2025-07-12 13:45:45 | INFO  | Task 1befab49-0e2b-451c-8a53-c5b6394c5856 is in state STARTED 2025-07-12 13:45:45.444502 | orchestrator | 2025-07-12 13:45:45 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:45:48.470723 | orchestrator | 2025-07-12 13:45:48 | INFO  | Task b4df9d47-288e-4475-a750-9dffc0e5dcee is in state STARTED 2025-07-12 13:45:48.471402 | orchestrator | 2025-07-12 13:45:48 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:45:48.472159 | orchestrator | 2025-07-12 13:45:48 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:45:48.473419 | orchestrator | 2025-07-12 13:45:48 | INFO  | Task 2f5a327b-aeff-47e0-a30c-3a088fb4e042 is in state STARTED 2025-07-12 13:45:48.474133 | orchestrator | 2025-07-12 13:45:48 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:45:48.474947 | orchestrator | 2025-07-12 13:45:48 | INFO  | Task 1befab49-0e2b-451c-8a53-c5b6394c5856 is in state STARTED 2025-07-12 13:45:48.474970 | orchestrator | 2025-07-12 13:45:48 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:45:51.531613 | orchestrator | 2025-07-12 13:45:51 | INFO  | Task b4df9d47-288e-4475-a750-9dffc0e5dcee is in state STARTED 2025-07-12 13:45:51.534486 | orchestrator | 2025-07-12 13:45:51 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:45:51.534518 | orchestrator | 2025-07-12 13:45:51 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:45:51.534530 | orchestrator | 2025-07-12 13:45:51 | INFO  | Task 2f5a327b-aeff-47e0-a30c-3a088fb4e042 is in state STARTED 2025-07-12 13:45:51.534541 | orchestrator | 2025-07-12 13:45:51 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:45:51.534573 | orchestrator | 2025-07-12 13:45:51 | INFO  | Task 
1befab49-0e2b-451c-8a53-c5b6394c5856 is in state STARTED 2025-07-12 13:45:51.534584 | orchestrator | 2025-07-12 13:45:51 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:45:54.585696 | orchestrator | 2025-07-12 13:45:54 | INFO  | Task b4df9d47-288e-4475-a750-9dffc0e5dcee is in state STARTED 2025-07-12 13:45:54.586924 | orchestrator | 2025-07-12 13:45:54 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:45:54.587817 | orchestrator | 2025-07-12 13:45:54 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:45:54.589504 | orchestrator | 2025-07-12 13:45:54 | INFO  | Task 2f5a327b-aeff-47e0-a30c-3a088fb4e042 is in state STARTED 2025-07-12 13:45:54.590239 | orchestrator | 2025-07-12 13:45:54 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:45:54.591198 | orchestrator | 2025-07-12 13:45:54 | INFO  | Task 1befab49-0e2b-451c-8a53-c5b6394c5856 is in state STARTED 2025-07-12 13:45:54.591221 | orchestrator | 2025-07-12 13:45:54 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:45:57.619744 | orchestrator | 2025-07-12 13:45:57 | INFO  | Task b4df9d47-288e-4475-a750-9dffc0e5dcee is in state STARTED 2025-07-12 13:45:57.621156 | orchestrator | 2025-07-12 13:45:57 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:45:57.623168 | orchestrator | 2025-07-12 13:45:57 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:45:57.625396 | orchestrator | 2025-07-12 13:45:57 | INFO  | Task 2f5a327b-aeff-47e0-a30c-3a088fb4e042 is in state STARTED 2025-07-12 13:45:57.626670 | orchestrator | 2025-07-12 13:45:57 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:45:57.627445 | orchestrator | 2025-07-12 13:45:57 | INFO  | Task 1befab49-0e2b-451c-8a53-c5b6394c5856 is in state STARTED 2025-07-12 13:45:57.627475 | orchestrator | 2025-07-12 13:45:57 | INFO  | Wait 1 
second(s) until the next check 2025-07-12 13:46:00.661270 | orchestrator | 2025-07-12 13:46:00 | INFO  | Task b4df9d47-288e-4475-a750-9dffc0e5dcee is in state STARTED 2025-07-12 13:46:00.661621 | orchestrator | 2025-07-12 13:46:00 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:46:00.662088 | orchestrator | 2025-07-12 13:46:00 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:46:00.662853 | orchestrator | 2025-07-12 13:46:00 | INFO  | Task 2f5a327b-aeff-47e0-a30c-3a088fb4e042 is in state STARTED 2025-07-12 13:46:00.663521 | orchestrator | 2025-07-12 13:46:00 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:46:00.664100 | orchestrator | 2025-07-12 13:46:00 | INFO  | Task 1befab49-0e2b-451c-8a53-c5b6394c5856 is in state SUCCESS 2025-07-12 13:46:00.665198 | orchestrator | 2025-07-12 13:46:00.665227 | orchestrator | 2025-07-12 13:46:00.665239 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 13:46:00.665251 | orchestrator | 2025-07-12 13:46:00.665263 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 13:46:00.665274 | orchestrator | Saturday 12 July 2025 13:45:29 +0000 (0:00:00.443) 0:00:00.443 ********* 2025-07-12 13:46:00.665286 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:46:00.665297 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:46:00.665308 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:46:00.665319 | orchestrator | 2025-07-12 13:46:00.665330 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 13:46:00.665341 | orchestrator | Saturday 12 July 2025 13:45:29 +0000 (0:00:00.385) 0:00:00.829 ********* 2025-07-12 13:46:00.665352 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-07-12 13:46:00.665363 | orchestrator | ok: [testbed-node-1] => 
(item=enable_memcached_True) 2025-07-12 13:46:00.665374 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-07-12 13:46:00.665408 | orchestrator | 2025-07-12 13:46:00.665420 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-07-12 13:46:00.665431 | orchestrator | 2025-07-12 13:46:00.665441 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-07-12 13:46:00.665452 | orchestrator | Saturday 12 July 2025 13:45:30 +0000 (0:00:00.563) 0:00:01.392 ********* 2025-07-12 13:46:00.665463 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:46:00.665475 | orchestrator | 2025-07-12 13:46:00.665486 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-07-12 13:46:00.665497 | orchestrator | Saturday 12 July 2025 13:45:31 +0000 (0:00:00.603) 0:00:01.995 ********* 2025-07-12 13:46:00.665507 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-07-12 13:46:00.665518 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-07-12 13:46:00.665529 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-07-12 13:46:00.665539 | orchestrator | 2025-07-12 13:46:00.665550 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-07-12 13:46:00.665560 | orchestrator | Saturday 12 July 2025 13:45:31 +0000 (0:00:00.726) 0:00:02.722 ********* 2025-07-12 13:46:00.665571 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-07-12 13:46:00.665582 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-07-12 13:46:00.665592 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-07-12 13:46:00.665603 | orchestrator | 2025-07-12 13:46:00.665613 | orchestrator | TASK [memcached : Check memcached container] 
*********************************** 2025-07-12 13:46:00.665624 | orchestrator | Saturday 12 July 2025 13:45:34 +0000 (0:00:02.447) 0:00:05.170 ********* 2025-07-12 13:46:00.665634 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:46:00.665645 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:46:00.665656 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:46:00.665666 | orchestrator | 2025-07-12 13:46:00.665677 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-07-12 13:46:00.665687 | orchestrator | Saturday 12 July 2025 13:45:36 +0000 (0:00:02.553) 0:00:07.723 ********* 2025-07-12 13:46:00.665698 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:46:00.665708 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:46:00.665719 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:46:00.665729 | orchestrator | 2025-07-12 13:46:00.665740 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:46:00.665751 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:46:00.665764 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:46:00.665775 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:46:00.665786 | orchestrator | 2025-07-12 13:46:00.665796 | orchestrator | 2025-07-12 13:46:00.665811 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:46:00.665824 | orchestrator | Saturday 12 July 2025 13:45:40 +0000 (0:00:03.301) 0:00:11.024 ********* 2025-07-12 13:46:00.665835 | orchestrator | =============================================================================== 2025-07-12 13:46:00.665848 | orchestrator | memcached : Restart memcached container --------------------------------- 3.30s 
2025-07-12 13:46:00.665861 | orchestrator | memcached : Check memcached container ----------------------------------- 2.55s 2025-07-12 13:46:00.665874 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.45s 2025-07-12 13:46:00.665886 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.73s 2025-07-12 13:46:00.665898 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.60s 2025-07-12 13:46:00.665919 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.56s 2025-07-12 13:46:00.665931 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.39s 2025-07-12 13:46:00.665943 | orchestrator | 2025-07-12 13:46:00.665955 | orchestrator | 2025-07-12 13:46:00.666008 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 13:46:00.666079 | orchestrator | 2025-07-12 13:46:00.666093 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 13:46:00.666114 | orchestrator | Saturday 12 July 2025 13:45:28 +0000 (0:00:00.701) 0:00:00.701 ********* 2025-07-12 13:46:00.666126 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:46:00.666139 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:46:00.666151 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:46:00.666163 | orchestrator | 2025-07-12 13:46:00.666174 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 13:46:00.666198 | orchestrator | Saturday 12 July 2025 13:45:28 +0000 (0:00:00.388) 0:00:01.090 ********* 2025-07-12 13:46:00.666209 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-07-12 13:46:00.666220 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-07-12 13:46:00.666230 | orchestrator | ok: [testbed-node-2] => 
(item=enable_redis_True) 2025-07-12 13:46:00.666241 | orchestrator | 2025-07-12 13:46:00.666251 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-07-12 13:46:00.666262 | orchestrator | 2025-07-12 13:46:00.666272 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-07-12 13:46:00.666283 | orchestrator | Saturday 12 July 2025 13:45:29 +0000 (0:00:00.456) 0:00:01.547 ********* 2025-07-12 13:46:00.666294 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:46:00.666305 | orchestrator | 2025-07-12 13:46:00.666315 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-07-12 13:46:00.666326 | orchestrator | Saturday 12 July 2025 13:45:29 +0000 (0:00:00.509) 0:00:02.056 ********* 2025-07-12 13:46:00.666340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 13:46:00.666357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 13:46:00.666369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 13:46:00.666381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 13:46:00.666407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 13:46:00.666429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 13:46:00.666441 | orchestrator | 2025-07-12 13:46:00.666452 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-07-12 13:46:00.666463 | orchestrator | Saturday 12 July 2025 13:45:31 +0000 (0:00:01.434) 0:00:03.491 ********* 2025-07-12 13:46:00.666474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 13:46:00.666486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 13:46:00.666497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 13:46:00.666514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 13:46:00.666526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': 
'/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 13:46:00.666590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 13:46:00.666604 | orchestrator | 2025-07-12 13:46:00.666615 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-07-12 13:46:00.666626 | orchestrator | Saturday 12 July 2025 13:45:34 +0000 (0:00:03.537) 0:00:07.028 ********* 2025-07-12 13:46:00.666637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 13:46:00.666649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 13:46:00.666660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 13:46:00.666679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 13:46:00.666690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 13:46:00.666713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 13:46:00.666726 | orchestrator | 2025-07-12 13:46:00.666737 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-07-12 13:46:00.666748 | orchestrator | Saturday 12 July 2025 13:45:38 +0000 (0:00:03.323) 0:00:10.352 ********* 2025-07-12 13:46:00.666759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': 
{'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 13:46:00.666771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 13:46:00.666782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 13:46:00.666800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 13:46:00.666811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 13:46:00.666829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 13:46:00.666841 | orchestrator | 2025-07-12 13:46:00.666852 
| orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-07-12 13:46:00.666863 | orchestrator | Saturday 12 July 2025 13:45:40 +0000 (0:00:02.099) 0:00:12.451 ********* 2025-07-12 13:46:00.666874 | orchestrator | 2025-07-12 13:46:00.666885 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-07-12 13:46:00.666896 | orchestrator | Saturday 12 July 2025 13:45:40 +0000 (0:00:00.083) 0:00:12.535 ********* 2025-07-12 13:46:00.666907 | orchestrator | 2025-07-12 13:46:00.666917 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-07-12 13:46:00.666928 | orchestrator | Saturday 12 July 2025 13:45:40 +0000 (0:00:00.078) 0:00:12.613 ********* 2025-07-12 13:46:00.666939 | orchestrator | 2025-07-12 13:46:00.666950 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-07-12 13:46:00.666961 | orchestrator | Saturday 12 July 2025 13:45:40 +0000 (0:00:00.082) 0:00:12.695 ********* 2025-07-12 13:46:00.667007 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:46:00.667018 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:46:00.667029 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:46:00.667040 | orchestrator | 2025-07-12 13:46:00.667051 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-07-12 13:46:00.667062 | orchestrator | Saturday 12 July 2025 13:45:48 +0000 (0:00:08.042) 0:00:20.738 ********* 2025-07-12 13:46:00.667072 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:46:00.667083 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:46:00.667101 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:46:00.667112 | orchestrator | 2025-07-12 13:46:00.667123 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:46:00.667134 | orchestrator | 
testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:46:00.667145 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:46:00.667156 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:46:00.667167 | orchestrator | 2025-07-12 13:46:00.667177 | orchestrator | 2025-07-12 13:46:00.667188 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:46:00.667199 | orchestrator | Saturday 12 July 2025 13:45:57 +0000 (0:00:08.805) 0:00:29.543 ********* 2025-07-12 13:46:00.667210 | orchestrator | =============================================================================== 2025-07-12 13:46:00.667220 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.81s 2025-07-12 13:46:00.667231 | orchestrator | redis : Restart redis container ----------------------------------------- 8.04s 2025-07-12 13:46:00.667242 | orchestrator | redis : Copying over default config.json files -------------------------- 3.54s 2025-07-12 13:46:00.667252 | orchestrator | redis : Copying over redis config files --------------------------------- 3.32s 2025-07-12 13:46:00.667263 | orchestrator | redis : Check redis containers ------------------------------------------ 2.10s 2025-07-12 13:46:00.667274 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.43s 2025-07-12 13:46:00.667284 | orchestrator | redis : include_tasks --------------------------------------------------- 0.51s 2025-07-12 13:46:00.667295 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s 2025-07-12 13:46:00.667306 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.39s 2025-07-12 13:46:00.667316 | orchestrator | redis : Flush handlers 
-------------------------------------------------- 0.24s 2025-07-12 13:46:00.667327 | orchestrator | 2025-07-12 13:46:00 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:46:03.690445 | orchestrator | 2025-07-12 13:46:03 | INFO  | Task b4df9d47-288e-4475-a750-9dffc0e5dcee is in state STARTED 2025-07-12 13:46:03.690645 | orchestrator | 2025-07-12 13:46:03 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:46:03.690677 | orchestrator | 2025-07-12 13:46:03 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:46:03.692144 | orchestrator | 2025-07-12 13:46:03 | INFO  | Task 2f5a327b-aeff-47e0-a30c-3a088fb4e042 is in state STARTED 2025-07-12 13:46:03.692713 | orchestrator | 2025-07-12 13:46:03 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:46:03.693437 | orchestrator | 2025-07-12 13:46:03 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:46:06.732703 | orchestrator | 2025-07-12 13:46:06 | INFO  | Task b4df9d47-288e-4475-a750-9dffc0e5dcee is in state STARTED 2025-07-12 13:46:06.732938 | orchestrator | 2025-07-12 13:46:06 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:46:06.733635 | orchestrator | 2025-07-12 13:46:06 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:46:06.734168 | orchestrator | 2025-07-12 13:46:06 | INFO  | Task 2f5a327b-aeff-47e0-a30c-3a088fb4e042 is in state STARTED 2025-07-12 13:46:06.735129 | orchestrator | 2025-07-12 13:46:06 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:46:06.735152 | orchestrator | 2025-07-12 13:46:06 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:46:09.763176 | orchestrator | 2025-07-12 13:46:09 | INFO  | Task b4df9d47-288e-4475-a750-9dffc0e5dcee is in state STARTED 2025-07-12 13:46:09.763283 | orchestrator | 2025-07-12 13:46:09 | INFO  | Task 
b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state STARTED 2025-07-12 13:46:09.763870 | orchestrator | 2025-07-12 13:46:09 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:46:09.766155 | orchestrator | 2025-07-12 13:46:09 | INFO  | Task 2f5a327b-aeff-47e0-a30c-3a088fb4e042 is in state STARTED 2025-07-12 13:46:09.768065 | orchestrator | 2025-07-12 13:46:09 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:46:09.768089 | orchestrator | 2025-07-12 13:46:09 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:46:12.796147 | orchestrator | 2025-07-12 13:46:12 | INFO  | Task c93a338b-2385-4a27-b500-0b6e9a6c32e1 is in state STARTED 2025-07-12 13:46:12.797156 | orchestrator | 2025-07-12 13:46:12 | INFO  | Task b4df9d47-288e-4475-a750-9dffc0e5dcee is in state STARTED 2025-07-12 13:46:12.798778 | orchestrator | 2025-07-12 13:46:12 | INFO  | Task b3c48685-0f4b-4504-b2ac-aba57040bdcd is in state SUCCESS 2025-07-12 13:46:12.800332 | orchestrator | 2025-07-12 13:46:12.800370 | orchestrator | 2025-07-12 13:46:12.800382 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-07-12 13:46:12.800394 | orchestrator | 2025-07-12 13:46:12.800406 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-07-12 13:46:12.800422 | orchestrator | Saturday 12 July 2025 13:42:20 +0000 (0:00:00.172) 0:00:00.172 ********* 2025-07-12 13:46:12.800434 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:46:12.800446 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:46:12.800457 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:46:12.800467 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:46:12.800478 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:46:12.800489 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:46:12.800500 | orchestrator | 2025-07-12 13:46:12.800511 | orchestrator | TASK [k3s_prereq : Set same 
timezone on every Server] ************************** 2025-07-12 13:46:12.800522 | orchestrator | Saturday 12 July 2025 13:42:21 +0000 (0:00:00.691) 0:00:00.863 ********* 2025-07-12 13:46:12.800533 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:46:12.800544 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:46:12.800555 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:46:12.800565 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:12.800576 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:46:12.800587 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:46:12.800597 | orchestrator | 2025-07-12 13:46:12.800608 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-07-12 13:46:12.800619 | orchestrator | Saturday 12 July 2025 13:42:21 +0000 (0:00:00.661) 0:00:01.525 ********* 2025-07-12 13:46:12.800630 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:46:12.800641 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:46:12.800651 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:46:12.800662 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:12.800673 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:46:12.800683 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:46:12.800694 | orchestrator | 2025-07-12 13:46:12.800705 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-07-12 13:46:12.800716 | orchestrator | Saturday 12 July 2025 13:42:22 +0000 (0:00:00.691) 0:00:02.216 ********* 2025-07-12 13:46:12.800727 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:46:12.800737 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:46:12.800748 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:46:12.800758 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:46:12.800769 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:46:12.800780 | orchestrator | changed: 
[testbed-node-2] 2025-07-12 13:46:12.800791 | orchestrator | 2025-07-12 13:46:12.800825 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-07-12 13:46:12.800837 | orchestrator | Saturday 12 July 2025 13:42:25 +0000 (0:00:03.205) 0:00:05.422 ********* 2025-07-12 13:46:12.800848 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:46:12.800858 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:46:12.800869 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:46:12.800879 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:46:12.800890 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:46:12.800900 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:46:12.800911 | orchestrator | 2025-07-12 13:46:12.800923 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-07-12 13:46:12.800935 | orchestrator | Saturday 12 July 2025 13:42:27 +0000 (0:00:01.339) 0:00:06.761 ********* 2025-07-12 13:46:12.800972 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:46:12.800993 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:46:12.801014 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:46:12.801034 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:46:12.801046 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:46:12.801058 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:46:12.801070 | orchestrator | 2025-07-12 13:46:12.801083 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-07-12 13:46:12.801218 | orchestrator | Saturday 12 July 2025 13:42:28 +0000 (0:00:01.693) 0:00:08.455 ********* 2025-07-12 13:46:12.801233 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:46:12.801245 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:46:12.801272 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:46:12.801285 | orchestrator | skipping: [testbed-node-0] 
2025-07-12 13:46:12.801297 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:46:12.801308 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:46:12.801319 | orchestrator | 2025-07-12 13:46:12.801330 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-07-12 13:46:12.801340 | orchestrator | Saturday 12 July 2025 13:42:30 +0000 (0:00:01.195) 0:00:09.650 ********* 2025-07-12 13:46:12.801351 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:46:12.801362 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:46:12.801372 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:46:12.801383 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:12.801393 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:46:12.801471 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:46:12.801483 | orchestrator | 2025-07-12 13:46:12.801493 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-07-12 13:46:12.801504 | orchestrator | Saturday 12 July 2025 13:42:30 +0000 (0:00:00.769) 0:00:10.420 ********* 2025-07-12 13:46:12.801515 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-12 13:46:12.801526 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-12 13:46:12.801536 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:46:12.801547 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-12 13:46:12.801558 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-12 13:46:12.801568 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:46:12.801579 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-12 13:46:12.801589 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-12 
13:46:12.801600 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:46:12.801611 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-12 13:46:12.801637 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-12 13:46:12.801648 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:12.801659 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-12 13:46:12.801683 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-12 13:46:12.801694 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:46:12.801705 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-12 13:46:12.801715 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-12 13:46:12.801726 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:46:12.801736 | orchestrator | 2025-07-12 13:46:12.801747 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-07-12 13:46:12.801758 | orchestrator | Saturday 12 July 2025 13:42:31 +0000 (0:00:00.949) 0:00:11.369 ********* 2025-07-12 13:46:12.801768 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:46:12.801779 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:46:12.801790 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:46:12.801800 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:12.801811 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:46:12.801821 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:46:12.801832 | orchestrator | 2025-07-12 13:46:12.801842 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-07-12 13:46:12.801854 | orchestrator | Saturday 12 July 2025 13:42:32 +0000 (0:00:01.144) 0:00:12.513 
********* 2025-07-12 13:46:12.801865 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:46:12.801875 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:46:12.801886 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:46:12.801896 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:46:12.801907 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:46:12.801917 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:46:12.801928 | orchestrator | 2025-07-12 13:46:12.801939 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-07-12 13:46:12.801984 | orchestrator | Saturday 12 July 2025 13:42:33 +0000 (0:00:00.907) 0:00:13.421 ********* 2025-07-12 13:46:12.801997 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:46:12.802007 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:46:12.802063 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:46:12.802078 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:46:12.802088 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:46:12.802099 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:46:12.802110 | orchestrator | 2025-07-12 13:46:12.802121 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-07-12 13:46:12.802131 | orchestrator | Saturday 12 July 2025 13:42:40 +0000 (0:00:06.336) 0:00:19.757 ********* 2025-07-12 13:46:12.802142 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:46:12.802153 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:46:12.802163 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:46:12.802174 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:12.802184 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:46:12.802195 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:46:12.802205 | orchestrator | 2025-07-12 13:46:12.802216 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-07-12 
13:46:12.802227 | orchestrator | Saturday 12 July 2025 13:42:41 +0000 (0:00:01.359) 0:00:21.117 ********* 2025-07-12 13:46:12.802237 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:46:12.802248 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:46:12.802258 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:46:12.802269 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:12.802279 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:46:12.802290 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:46:12.802300 | orchestrator | 2025-07-12 13:46:12.802311 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-07-12 13:46:12.802323 | orchestrator | Saturday 12 July 2025 13:42:43 +0000 (0:00:01.858) 0:00:22.976 ********* 2025-07-12 13:46:12.802341 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:46:12.802363 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:46:12.802374 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:46:12.802385 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:46:12.802395 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:46:12.802406 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:46:12.802416 | orchestrator | 2025-07-12 13:46:12.802427 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-07-12 13:46:12.802438 | orchestrator | Saturday 12 July 2025 13:42:44 +0000 (0:00:00.976) 0:00:23.952 ********* 2025-07-12 13:46:12.802448 | orchestrator | changed: [testbed-node-3] => (item=rancher) 2025-07-12 13:46:12.802459 | orchestrator | changed: [testbed-node-4] => (item=rancher) 2025-07-12 13:46:12.802470 | orchestrator | changed: [testbed-node-5] => (item=rancher) 2025-07-12 13:46:12.802481 | orchestrator | changed: [testbed-node-0] => (item=rancher) 2025-07-12 13:46:12.802491 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s) 2025-07-12 
13:46:12.802502 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s) 2025-07-12 13:46:12.802512 | orchestrator | changed: [testbed-node-1] => (item=rancher) 2025-07-12 13:46:12.802523 | orchestrator | changed: [testbed-node-2] => (item=rancher) 2025-07-12 13:46:12.802533 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s) 2025-07-12 13:46:12.802544 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s) 2025-07-12 13:46:12.802555 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s) 2025-07-12 13:46:12.802565 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s) 2025-07-12 13:46:12.802576 | orchestrator | 2025-07-12 13:46:12.802587 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-07-12 13:46:12.802597 | orchestrator | Saturday 12 July 2025 13:42:46 +0000 (0:00:01.768) 0:00:25.721 ********* 2025-07-12 13:46:12.802608 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:46:12.802742 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:46:12.802753 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:46:12.802764 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:46:12.802774 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:46:12.802785 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:46:12.802796 | orchestrator | 2025-07-12 13:46:12.802817 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-07-12 13:46:12.802828 | orchestrator | 2025-07-12 13:46:12.802839 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-07-12 13:46:12.802850 | orchestrator | Saturday 12 July 2025 13:42:48 +0000 (0:00:02.792) 0:00:28.513 ********* 2025-07-12 13:46:12.802861 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:46:12.802871 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:46:12.802882 | orchestrator | ok: [testbed-node-2] 
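For context (the file contents are not shown in the log): the `k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml` task a few steps above writes a mirror configuration in the upstream k3s `registries.yaml` format. A hedged sketch, assuming the testbed mirrors container images through `registry.osism.tech` (the registry visible in the redis image names earlier in this log); the real endpoint list written by the role may differ:

```yaml
# Hypothetical sketch of /etc/rancher/k3s/registries.yaml -- the actual
# contents written by the task are not visible in this log.
mirrors:
  docker.io:
    endpoint:
      - "https://registry.osism.tech"   # assumed mirror endpoint
```

k3s reads this file at startup, which is why the task has to run before the `Deploy k3s master nodes` play starts the k3s servers.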
2025-07-12 13:46:12.802892 | orchestrator | 2025-07-12 13:46:12.802903 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-07-12 13:46:12.802914 | orchestrator | Saturday 12 July 2025 13:42:50 +0000 (0:00:01.164) 0:00:29.678 ********* 2025-07-12 13:46:12.802924 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:46:12.802935 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:46:12.803014 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:46:12.803028 | orchestrator | 2025-07-12 13:46:12.803038 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-07-12 13:46:12.803049 | orchestrator | Saturday 12 July 2025 13:42:51 +0000 (0:00:01.389) 0:00:31.067 ********* 2025-07-12 13:46:12.803060 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:46:12.803070 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:46:12.803081 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:46:12.803091 | orchestrator | 2025-07-12 13:46:12.803102 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-07-12 13:46:12.803113 | orchestrator | Saturday 12 July 2025 13:42:52 +0000 (0:00:01.232) 0:00:32.299 ********* 2025-07-12 13:46:12.803123 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:46:12.803134 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:46:12.803145 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:46:12.803164 | orchestrator | 2025-07-12 13:46:12.803175 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-07-12 13:46:12.803185 | orchestrator | Saturday 12 July 2025 13:42:53 +0000 (0:00:00.841) 0:00:33.141 ********* 2025-07-12 13:46:12.803196 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:12.803206 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:46:12.803217 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:46:12.803228 | orchestrator | 2025-07-12 
13:46:12.803239 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2025-07-12 13:46:12.803249 | orchestrator | Saturday 12 July 2025 13:42:53 +0000 (0:00:00.439) 0:00:33.581 ********* 2025-07-12 13:46:12.803355 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:46:12.803369 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:46:12.803378 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:46:12.803388 | orchestrator | 2025-07-12 13:46:12.803397 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2025-07-12 13:46:12.803407 | orchestrator | Saturday 12 July 2025 13:42:54 +0000 (0:00:00.898) 0:00:34.479 ********* 2025-07-12 13:46:12.803416 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:46:12.803426 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:46:12.803435 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:46:12.803445 | orchestrator | 2025-07-12 13:46:12.803455 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-07-12 13:46:12.803464 | orchestrator | Saturday 12 July 2025 13:42:56 +0000 (0:00:01.578) 0:00:36.057 ********* 2025-07-12 13:46:12.803473 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:46:12.803483 | orchestrator | 2025-07-12 13:46:12.803492 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-07-12 13:46:12.803502 | orchestrator | Saturday 12 July 2025 13:42:57 +0000 (0:00:00.574) 0:00:36.632 ********* 2025-07-12 13:46:12.803511 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:46:12.803521 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:46:12.803530 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:46:12.803540 | orchestrator | 2025-07-12 13:46:12.803549 | orchestrator | TASK [k3s_server : Create manifests directory on first master] 
***************** 2025-07-12 13:46:12.803559 | orchestrator | Saturday 12 July 2025 13:42:59 +0000 (0:00:02.290) 0:00:38.922 ********* 2025-07-12 13:46:12.803568 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:46:12.803584 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:46:12.803594 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:46:12.803605 | orchestrator | 2025-07-12 13:46:12.803615 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-07-12 13:46:12.803626 | orchestrator | Saturday 12 July 2025 13:43:00 +0000 (0:00:01.129) 0:00:40.052 ********* 2025-07-12 13:46:12.803636 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:46:12.803647 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:46:12.803657 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:46:12.803667 | orchestrator | 2025-07-12 13:46:12.803678 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-07-12 13:46:12.803688 | orchestrator | Saturday 12 July 2025 13:43:01 +0000 (0:00:01.147) 0:00:41.200 ********* 2025-07-12 13:46:12.803698 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:46:12.803708 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:46:12.803719 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:46:12.803729 | orchestrator | 2025-07-12 13:46:12.803739 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-07-12 13:46:12.803750 | orchestrator | Saturday 12 July 2025 13:43:03 +0000 (0:00:01.847) 0:00:43.047 ********* 2025-07-12 13:46:12.803760 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:12.803770 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:46:12.803780 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:46:12.803791 | orchestrator | 2025-07-12 13:46:12.803801 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] 
*********************************** 2025-07-12 13:46:12.803819 | orchestrator | Saturday 12 July 2025 13:43:03 +0000 (0:00:00.559) 0:00:43.607 ********* 2025-07-12 13:46:12.803829 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:12.803840 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:46:12.803850 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:46:12.803860 | orchestrator | 2025-07-12 13:46:12.803871 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-07-12 13:46:12.803881 | orchestrator | Saturday 12 July 2025 13:43:04 +0000 (0:00:00.570) 0:00:44.178 ********* 2025-07-12 13:46:12.803892 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:46:12.803902 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:46:12.803913 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:46:12.803923 | orchestrator | 2025-07-12 13:46:12.803942 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-07-12 13:46:12.803979 | orchestrator | Saturday 12 July 2025 13:43:05 +0000 (0:00:01.291) 0:00:45.469 ********* 2025-07-12 13:46:12.803989 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-07-12 13:46:12.803999 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-07-12 13:46:12.804009 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-07-12 13:46:12.804019 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
2025-07-12 13:46:12.804028 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-07-12 13:46:12.804038 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-07-12 13:46:12.804047 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-07-12 13:46:12.804057 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-07-12 13:46:12.804066 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-07-12 13:46:12.804076 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-07-12 13:46:12.804085 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-07-12 13:46:12.804095 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-07-12 13:46:12.804104 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-07-12 13:46:12.804113 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-07-12 13:46:12.804123 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
2025-07-12 13:46:12.804132 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:46:12.804142 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:46:12.804151 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:46:12.804160 | orchestrator |
2025-07-12 13:46:12.804170 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2025-07-12 13:46:12.804180 | orchestrator | Saturday 12 July 2025 13:44:01 +0000 (0:00:55.782) 0:01:41.252 *********
2025-07-12 13:46:12.804189 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:12.804210 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:46:12.804221 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:46:12.804230 | orchestrator |
2025-07-12 13:46:12.804240 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2025-07-12 13:46:12.804249 | orchestrator | Saturday 12 July 2025 13:44:02 +0000 (0:00:01.210) 0:01:42.462 *********
2025-07-12 13:46:12.804259 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:46:12.804268 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:46:12.804278 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:46:12.804287 | orchestrator |
2025-07-12 13:46:12.804296 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2025-07-12 13:46:12.804306 | orchestrator | Saturday 12 July 2025 13:44:05 +0000 (0:00:02.745) 0:01:45.207 *********
2025-07-12 13:46:12.804315 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:46:12.804324 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:46:12.804334 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:46:12.804343 | orchestrator |
2025-07-12 13:46:12.804353 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2025-07-12 13:46:12.804362 | orchestrator | Saturday 12 July 2025 13:44:06 +0000 (0:00:01.245) 0:01:46.452 *********
2025-07-12 13:46:12.804371 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:46:12.804381 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:46:12.804390 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:46:12.804399 | orchestrator |
2025-07-12 13:46:12.804409 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2025-07-12 13:46:12.804418 | orchestrator | Saturday 12 July 2025 13:44:32 +0000 (0:00:25.229) 0:02:11.682 *********
2025-07-12 13:46:12.804428 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:46:12.804437 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:46:12.804446 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:46:12.804456 | orchestrator |
2025-07-12 13:46:12.804465 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2025-07-12 13:46:12.804475 | orchestrator | Saturday 12 July 2025 13:44:32 +0000 (0:00:00.879) 0:02:12.562 *********
2025-07-12 13:46:12.804484 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:46:12.804493 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:46:12.804503 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:46:12.804512 | orchestrator |
2025-07-12 13:46:12.804527 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2025-07-12 13:46:12.804537 | orchestrator | Saturday 12 July 2025 13:44:33 +0000 (0:00:01.010) 0:02:13.572 *********
2025-07-12 13:46:12.804547 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:46:12.804556 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:46:12.804565 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:46:12.804575 | orchestrator |
2025-07-12 13:46:12.804584 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2025-07-12 13:46:12.804594 | orchestrator | Saturday 12 July 2025 13:44:34 +0000 (0:00:00.696) 0:02:14.269 *********
2025-07-12 13:46:12.804604 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:46:12.804613 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:46:12.804622 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:46:12.804632 | orchestrator |
2025-07-12 13:46:12.804716 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2025-07-12 13:46:12.804813 | orchestrator | Saturday 12 July 2025 13:44:35 +0000 (0:00:00.723) 0:02:14.993 *********
2025-07-12 13:46:12.804828 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:46:12.804838 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:46:12.804847 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:46:12.804856 | orchestrator |
2025-07-12 13:46:12.804866 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2025-07-12 13:46:12.804875 | orchestrator | Saturday 12 July 2025 13:44:35 +0000 (0:00:00.344) 0:02:15.338 *********
2025-07-12 13:46:12.804885 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:46:12.804894 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:46:12.804912 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:46:12.804921 | orchestrator |
2025-07-12 13:46:12.804931 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2025-07-12 13:46:12.804941 | orchestrator | Saturday 12 July 2025 13:44:36 +0000 (0:00:00.918) 0:02:16.257 *********
2025-07-12 13:46:12.805012 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:46:12.805023 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:46:12.805032 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:46:12.805041 | orchestrator |
2025-07-12 13:46:12.805051 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2025-07-12 13:46:12.805060 | orchestrator | Saturday 12 July 2025 13:44:37 +0000 (0:00:00.729) 0:02:16.987 *********
2025-07-12 13:46:12.805069 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:46:12.805079 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:46:12.805088 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:46:12.805097 | orchestrator |
2025-07-12 13:46:12.805106 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2025-07-12 13:46:12.805116 | orchestrator | Saturday 12 July 2025 13:44:38 +0000 (0:00:00.954) 0:02:17.941 *********
2025-07-12 13:46:12.805125 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:46:12.805134 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:46:12.805144 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:46:12.805153 | orchestrator |
2025-07-12 13:46:12.805162 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2025-07-12 13:46:12.805172 | orchestrator | Saturday 12 July 2025 13:44:39 +0000 (0:00:00.884) 0:02:18.826 *********
2025-07-12 13:46:12.805181 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:12.805191 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:46:12.805200 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:46:12.805209 | orchestrator |
2025-07-12 13:46:12.805219 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2025-07-12 13:46:12.805228 | orchestrator | Saturday 12 July 2025 13:44:39 +0000 (0:00:00.684) 0:02:19.511 *********
2025-07-12 13:46:12.805237 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:12.805247 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:46:12.805256 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:46:12.805266 | orchestrator |
2025-07-12 13:46:12.805275 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2025-07-12 13:46:12.805285 | orchestrator | Saturday 12 July 2025 13:44:40 +0000 (0:00:00.327) 0:02:19.838 *********
2025-07-12 13:46:12.805294 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:46:12.805304 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:46:12.805319 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:46:12.805328 | orchestrator |
2025-07-12 13:46:12.805338 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2025-07-12 13:46:12.805347 | orchestrator | Saturday 12 July 2025 13:44:41 +0000 (0:00:00.813) 0:02:20.652 *********
2025-07-12 13:46:12.805357 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:46:12.805366 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:46:12.805376 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:46:12.805385 | orchestrator |
2025-07-12 13:46:12.805394 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2025-07-12 13:46:12.805404 | orchestrator | Saturday 12 July 2025 13:44:41 +0000 (0:00:00.656) 0:02:21.308 *********
2025-07-12 13:46:12.805413 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-07-12 13:46:12.805423 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-07-12 13:46:12.805432 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-07-12 13:46:12.805442 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-07-12 13:46:12.805452 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-07-12 13:46:12.805467 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-07-12 13:46:12.805476 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-07-12 13:46:12.805484 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2025-07-12 13:46:12.805493 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-07-12 13:46:12.805508 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-07-12 13:46:12.805517 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2025-07-12 13:46:12.805526 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-07-12 13:46:12.805534 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-07-12 13:46:12.805543 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-07-12 13:46:12.805551 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-07-12 13:46:12.805560 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-07-12 13:46:12.805569 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-07-12 13:46:12.805577 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-07-12 13:46:12.805586 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-07-12 13:46:12.805594 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-07-12 13:46:12.805603 | orchestrator |
2025-07-12 13:46:12.805612 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2025-07-12 13:46:12.805620 | orchestrator |
2025-07-12 13:46:12.805629 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2025-07-12 13:46:12.805638 | orchestrator | Saturday 12 July 2025 13:44:44 +0000 (0:00:03.234) 0:02:24.543 *********
2025-07-12 13:46:12.805646 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:46:12.805655 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:46:12.805663 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:46:12.805672 | orchestrator |
2025-07-12 13:46:12.805681 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2025-07-12 13:46:12.805689 | orchestrator | Saturday 12 July 2025 13:44:45 +0000 (0:00:00.370) 0:02:24.913 *********
2025-07-12 13:46:12.805698 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:46:12.805706 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:46:12.805715 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:46:12.805723 | orchestrator |
2025-07-12 13:46:12.805732 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2025-07-12 13:46:12.805741 | orchestrator | Saturday 12 July 2025 13:44:45 +0000 (0:00:00.703) 0:02:25.617 *********
2025-07-12 13:46:12.805749 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:46:12.805758 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:46:12.805766 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:46:12.805775 | orchestrator |
2025-07-12 13:46:12.805784 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2025-07-12 13:46:12.805792 | orchestrator | Saturday 12 July 2025 13:44:46 +0000 (0:00:00.628) 0:02:26.246 *********
2025-07-12 13:46:12.805801 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:46:12.805809 | orchestrator |
2025-07-12 13:46:12.805817 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2025-07-12 13:46:12.805825 | orchestrator | Saturday 12 July 2025 13:44:47 +0000 (0:00:00.573) 0:02:26.820 *********
2025-07-12 13:46:12.805832 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:46:12.805845 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:46:12.805853 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:46:12.805860 | orchestrator |
2025-07-12 13:46:12.805868 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2025-07-12 13:46:12.805876 | orchestrator | Saturday 12 July 2025 13:44:47 +0000 (0:00:00.368) 0:02:27.188 *********
2025-07-12 13:46:12.805914 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:46:12.805924 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:46:12.805932 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:46:12.805939 | orchestrator |
2025-07-12 13:46:12.805965 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2025-07-12 13:46:12.805973 | orchestrator | Saturday 12 July 2025 13:44:48 +0000 (0:00:00.659) 0:02:27.848 *********
2025-07-12 13:46:12.805981 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:46:12.805989 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:46:12.805996 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:46:12.806004 | orchestrator |
2025-07-12 13:46:12.806011 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2025-07-12 13:46:12.806046 | orchestrator | Saturday 12 July 2025 13:44:48 +0000 (0:00:00.346) 0:02:28.194 *********
2025-07-12 13:46:12.806055 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:46:12.806062 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:46:12.806070 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:46:12.806078 | orchestrator |
2025-07-12 13:46:12.806086 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2025-07-12 13:46:12.806093 | orchestrator | Saturday 12 July 2025 13:44:49 +0000 (0:00:00.806) 0:02:29.001 *********
2025-07-12 13:46:12.806101 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:46:12.806109 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:46:12.806117 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:46:12.806124 | orchestrator |
2025-07-12 13:46:12.806132 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2025-07-12 13:46:12.806140 | orchestrator | Saturday 12 July 2025 13:44:50 +0000 (0:00:01.345) 0:02:30.346 *********
2025-07-12 13:46:12.806147 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:46:12.806155 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:46:12.806163 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:46:12.806170 | orchestrator |
2025-07-12 13:46:12.806178 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2025-07-12 13:46:12.806186 | orchestrator | Saturday 12 July 2025 13:44:52 +0000 (0:00:01.725) 0:02:32.072 *********
2025-07-12 13:46:12.806194 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:46:12.806201 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:46:12.806209 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:46:12.806217 | orchestrator |
2025-07-12 13:46:12.806230 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-07-12 13:46:12.806238 | orchestrator |
2025-07-12 13:46:12.806246 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-07-12 13:46:12.806254 | orchestrator | Saturday 12 July 2025 13:45:04 +0000 (0:00:12.349) 0:02:44.422 *********
2025-07-12 13:46:12.806262 | orchestrator | ok: [testbed-manager]
2025-07-12 13:46:12.806269 | orchestrator |
2025-07-12 13:46:12.806277 | orchestrator | TASK [Create .kube directory] **************************************************
2025-07-12 13:46:12.806285 | orchestrator | Saturday 12 July 2025 13:45:05 +0000 (0:00:00.736) 0:02:45.158 *********
2025-07-12 13:46:12.806292 | orchestrator | changed: [testbed-manager]
2025-07-12 13:46:12.806300 | orchestrator |
2025-07-12 13:46:12.806308 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-07-12 13:46:12.806315 | orchestrator | Saturday 12 July 2025 13:45:05 +0000 (0:00:00.421) 0:02:45.580 *********
2025-07-12 13:46:12.806323 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-07-12 13:46:12.806331 | orchestrator |
2025-07-12 13:46:12.806338 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-07-12 13:46:12.806346 | orchestrator | Saturday 12 July 2025 13:45:07 +0000 (0:00:01.144) 0:02:46.724 *********
2025-07-12 13:46:12.806360 | orchestrator | changed: [testbed-manager]
2025-07-12 13:46:12.806367 | orchestrator |
2025-07-12 13:46:12.806375 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-07-12 13:46:12.806383 | orchestrator | Saturday 12 July 2025 13:45:08 +0000 (0:00:00.947) 0:02:47.671 *********
2025-07-12 13:46:12.806391 | orchestrator | changed: [testbed-manager]
2025-07-12 13:46:12.806398 | orchestrator |
2025-07-12 13:46:12.806406 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-07-12 13:46:12.806414 | orchestrator | Saturday 12 July 2025 13:45:08 +0000 (0:00:00.629) 0:02:48.300 *********
2025-07-12 13:46:12.806421 | orchestrator | changed: [testbed-manager -> localhost]
2025-07-12 13:46:12.806429 | orchestrator |
2025-07-12 13:46:12.806437 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-07-12 13:46:12.806445 | orchestrator | Saturday 12 July 2025 13:45:10 +0000 (0:00:01.908) 0:02:50.209 *********
2025-07-12 13:46:12.806453 | orchestrator | changed: [testbed-manager -> localhost]
2025-07-12 13:46:12.806460 | orchestrator |
2025-07-12 13:46:12.806468 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-07-12 13:46:12.806475 | orchestrator | Saturday 12 July 2025 13:45:11 +0000 (0:00:00.940) 0:02:51.150 *********
2025-07-12 13:46:12.806483 | orchestrator | changed: [testbed-manager]
2025-07-12 13:46:12.806491 | orchestrator |
2025-07-12 13:46:12.806498 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-07-12 13:46:12.806506 | orchestrator | Saturday 12 July 2025 13:45:12 +0000 (0:00:00.725) 0:02:51.875 *********
2025-07-12 13:46:12.806514 | orchestrator | changed: [testbed-manager]
2025-07-12 13:46:12.806522 | orchestrator |
2025-07-12 13:46:12.806530 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2025-07-12 13:46:12.806537 | orchestrator |
2025-07-12 13:46:12.806545 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2025-07-12 13:46:12.806553 | orchestrator | Saturday 12 July 2025 13:45:12 +0000 (0:00:00.508) 0:02:52.383 *********
2025-07-12 13:46:12.806560 | orchestrator | ok: [testbed-manager]
2025-07-12 13:46:12.806568 | orchestrator |
2025-07-12 13:46:12.806576 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2025-07-12 13:46:12.806583 | orchestrator | Saturday 12 July 2025 13:45:12 +0000 (0:00:00.165) 0:02:52.549 *********
2025-07-12 13:46:12.806591 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2025-07-12 13:46:12.806599 | orchestrator |
2025-07-12 13:46:12.806607 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2025-07-12 13:46:12.806614 | orchestrator | Saturday 12 July 2025 13:45:13 +0000 (0:00:00.431) 0:02:52.980 *********
2025-07-12 13:46:12.806626 | orchestrator | ok: [testbed-manager]
2025-07-12 13:46:12.806634 | orchestrator |
2025-07-12 13:46:12.806642 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2025-07-12 13:46:12.806650 | orchestrator | Saturday 12 July 2025 13:45:14 +0000 (0:00:01.017) 0:02:53.998 *********
2025-07-12 13:46:12.806658 | orchestrator | ok: [testbed-manager]
2025-07-12 13:46:12.806665 | orchestrator |
2025-07-12 13:46:12.806673 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2025-07-12 13:46:12.806681 | orchestrator | Saturday 12 July 2025 13:45:16 +0000 (0:00:01.687) 0:02:55.686 *********
2025-07-12 13:46:12.806688 | orchestrator | changed: [testbed-manager]
2025-07-12 13:46:12.806696 | orchestrator |
2025-07-12 13:46:12.806704 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2025-07-12 13:46:12.806711 | orchestrator | Saturday 12 July 2025 13:45:16 +0000 (0:00:00.753) 0:02:56.440 *********
2025-07-12 13:46:12.806719 | orchestrator | ok: [testbed-manager]
2025-07-12 13:46:12.806727 | orchestrator |
2025-07-12 13:46:12.806734 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2025-07-12 13:46:12.806742 | orchestrator | Saturday 12 July 2025 13:45:17 +0000 (0:00:00.421) 0:02:56.861 *********
2025-07-12 13:46:12.806755 | orchestrator | changed: [testbed-manager]
2025-07-12 13:46:12.806763 | orchestrator |
2025-07-12 13:46:12.806771 | orchestrator | TASK [kubectl : Install required packages] *************************************
2025-07-12 13:46:12.806779 | orchestrator | Saturday 12 July 2025 13:45:24 +0000 (0:00:07.436) 0:03:04.298 *********
2025-07-12 13:46:12.806786 | orchestrator | changed: [testbed-manager]
2025-07-12 13:46:12.806794 | orchestrator |
2025-07-12 13:46:12.806802 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2025-07-12 13:46:12.806809 | orchestrator | Saturday 12 July 2025 13:45:37 +0000 (0:00:12.681) 0:03:16.979 *********
2025-07-12 13:46:12.806817 | orchestrator | ok: [testbed-manager]
2025-07-12 13:46:12.806825 | orchestrator |
2025-07-12 13:46:12.806832 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2025-07-12 13:46:12.806840 | orchestrator |
2025-07-12 13:46:12.806848 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2025-07-12 13:46:12.806860 | orchestrator | Saturday 12 July 2025 13:45:37 +0000 (0:00:00.586) 0:03:17.566 *********
2025-07-12 13:46:12.806868 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:46:12.806876 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:46:12.806884 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:46:12.806891 | orchestrator |
2025-07-12 13:46:12.806899 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2025-07-12 13:46:12.806907 | orchestrator | Saturday 12 July 2025 13:45:38 +0000 (0:00:00.561) 0:03:18.128 *********
2025-07-12 13:46:12.806914 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:12.806922 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:46:12.806930 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:46:12.806938 | orchestrator |
2025-07-12 13:46:12.806962 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2025-07-12 13:46:12.806971 | orchestrator | Saturday 12 July 2025 13:45:38 +0000 (0:00:00.357) 0:03:18.485 *********
2025-07-12 13:46:12.806979 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 13:46:12.806987 | orchestrator |
2025-07-12 13:46:12.806994 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2025-07-12 13:46:12.807002 | orchestrator | Saturday 12 July 2025 13:45:39 +0000 (0:00:00.565) 0:03:19.050 *********
2025-07-12 13:46:12.807010 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:12.807017 | orchestrator |
2025-07-12 13:46:12.807025 | orchestrator | TASK [k3s_server_post : Check if Cilium CLI is installed] **********************
2025-07-12 13:46:12.807033 | orchestrator | Saturday 12 July 2025 13:45:40 +0000 (0:00:00.687) 0:03:19.738 *********
2025-07-12 13:46:12.807041 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:12.807048 | orchestrator |
2025-07-12 13:46:12.807056 | orchestrator | TASK [k3s_server_post : Check for Cilium CLI version in command output] ********
2025-07-12 13:46:12.807064 | orchestrator | Saturday 12 July 2025 13:45:40 +0000 (0:00:00.216) 0:03:19.954 *********
2025-07-12 13:46:12.807071 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:12.807079 | orchestrator |
2025-07-12 13:46:12.807087 | orchestrator | TASK [k3s_server_post : Get latest stable Cilium CLI version file] *************
2025-07-12 13:46:12.807094 | orchestrator | Saturday 12 July 2025 13:45:40 +0000 (0:00:00.269) 0:03:20.224 *********
2025-07-12 13:46:12.807102 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:12.807110 | orchestrator |
2025-07-12 13:46:12.807118 | orchestrator | TASK [k3s_server_post : Read Cilium CLI stable version from file] **************
2025-07-12 13:46:12.807125 | orchestrator | Saturday 12 July 2025 13:45:40 +0000 (0:00:00.239) 0:03:20.464 *********
2025-07-12 13:46:12.807133 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:12.807141 | orchestrator |
2025-07-12 13:46:12.807148 | orchestrator | TASK [k3s_server_post : Log installed Cilium CLI version] **********************
2025-07-12 13:46:12.807156 | orchestrator | Saturday 12 July 2025 13:45:41 +0000 (0:00:00.213) 0:03:20.677 *********
2025-07-12 13:46:12.807164 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:12.807172 | orchestrator |
2025-07-12 13:46:12.807179 | orchestrator | TASK [k3s_server_post : Log latest stable Cilium CLI version] ******************
2025-07-12 13:46:12.807193 | orchestrator | Saturday 12 July 2025 13:45:41 +0000 (0:00:00.229) 0:03:20.907 *********
2025-07-12 13:46:12.807200 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:12.807208 | orchestrator |
2025-07-12 13:46:12.807216 | orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] ***
2025-07-12 13:46:12.807224 | orchestrator | Saturday 12 July 2025 13:45:41 +0000 (0:00:00.255) 0:03:21.162 *********
2025-07-12 13:46:12.807231 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:12.807239 | orchestrator |
2025-07-12 13:46:12.807247 | orchestrator | TASK [k3s_server_post : Set architecture variable] *****************************
2025-07-12 13:46:12.807254 | orchestrator | Saturday 12 July 2025 13:45:41 +0000 (0:00:00.285) 0:03:21.447 *********
2025-07-12 13:46:12.807262 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:12.807269 | orchestrator |
2025-07-12 13:46:12.807277 | orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] **********************
2025-07-12 13:46:12.807289 | orchestrator | Saturday 12 July 2025 13:45:42 +0000 (0:00:00.250) 0:03:21.698 *********
2025-07-12 13:46:12.807297 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)
2025-07-12 13:46:12.807305 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)
2025-07-12 13:46:12.807313 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:12.807320 | orchestrator |
2025-07-12 13:46:12.807328 | orchestrator | TASK [k3s_server_post : Verify the downloaded tarball] *************************
2025-07-12 13:46:12.807336 | orchestrator | Saturday 12 July 2025 13:45:42 +0000 (0:00:00.342) 0:03:22.041 *********
2025-07-12 13:46:12.807343 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:12.807351 | orchestrator |
2025-07-12 13:46:12.807359 | orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ******************
2025-07-12 13:46:12.807366 | orchestrator | Saturday 12 July 2025 13:45:42 +0000 (0:00:00.201) 0:03:22.243 *********
2025-07-12 13:46:12.807374 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:12.807382 | orchestrator |
2025-07-12 13:46:12.807389 | orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] ***********
2025-07-12 13:46:12.807397 | orchestrator | Saturday 12 July 2025 13:45:42 +0000 (0:00:00.237) 0:03:22.480 *********
2025-07-12 13:46:12.807405 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:12.807412 | orchestrator |
2025-07-12 13:46:12.807420 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2025-07-12 13:46:12.807428 | orchestrator | Saturday 12 July 2025 13:45:43 +0000 (0:00:00.819) 0:03:23.300 *********
2025-07-12 13:46:12.807436 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:12.807443 | orchestrator |
2025-07-12 13:46:12.807451 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2025-07-12 13:46:12.807459 | orchestrator | Saturday 12 July 2025 13:45:43 +0000 (0:00:00.222) 0:03:23.523 *********
2025-07-12 13:46:12.807466 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:12.807474 | orchestrator |
2025-07-12 13:46:12.807482 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2025-07-12 13:46:12.807489 | orchestrator | Saturday 12 July 2025 13:45:44 +0000 (0:00:00.200) 0:03:23.723 *********
2025-07-12 13:46:12.807497 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:12.807505 | orchestrator |
2025-07-12 13:46:12.807512 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2025-07-12 13:46:12.807525 | orchestrator | Saturday 12 July 2025 13:45:44 +0000 (0:00:00.212) 0:03:23.936 *********
2025-07-12 13:46:12.807533 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:12.807541 | orchestrator |
2025-07-12 13:46:12.807549 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2025-07-12 13:46:12.807556 | orchestrator | Saturday 12 July 2025 13:45:44 +0000 (0:00:00.224) 0:03:24.161 *********
2025-07-12 13:46:12.807564 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:12.807572 | orchestrator |
2025-07-12 13:46:12.807579 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2025-07-12 13:46:12.807587 | orchestrator | Saturday 12 July 2025 13:45:44 +0000 (0:00:00.232) 0:03:24.393 *********
2025-07-12 13:46:12.807603 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:12.807611 | orchestrator |
2025-07-12 13:46:12.807619 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2025-07-12 13:46:12.807627 | orchestrator | Saturday 12 July 2025 13:45:44 +0000 (0:00:00.203) 0:03:24.596 *********
2025-07-12 13:46:12.807634 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:12.807642 | orchestrator |
2025-07-12 13:46:12.807650 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2025-07-12 13:46:12.807657 | orchestrator | Saturday 12 July 2025 13:45:45 +0000 (0:00:00.204) 0:03:24.801 *********
2025-07-12 13:46:12.807665 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:12.807673 | orchestrator |
2025-07-12 13:46:12.807681 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2025-07-12 13:46:12.807688 | orchestrator | Saturday 12 July 2025 13:45:45 +0000 (0:00:00.188) 0:03:24.989 *********
2025-07-12 13:46:12.807696 | orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)
2025-07-12 13:46:12.807704 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)
2025-07-12 13:46:12.807712 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-relay)
2025-07-12 13:46:12.807719 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)
2025-07-12 13:46:12.807727 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:12.807735 | orchestrator |
2025-07-12 13:46:12.807742 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2025-07-12 13:46:12.807750 | orchestrator | Saturday 12 July 2025 13:45:45 +0000 (0:00:00.505) 0:03:25.494 *********
2025-07-12 13:46:12.807758 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:12.807765 | orchestrator |
2025-07-12 13:46:12.807773 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2025-07-12 13:46:12.807781 | orchestrator | Saturday 12 July 2025 13:45:46 +0000 (0:00:00.175) 0:03:25.670 *********
2025-07-12 13:46:12.807789 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:12.807796 | orchestrator |
2025-07-12 13:46:12.807804 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2025-07-12 13:46:12.807812 | orchestrator | Saturday 12 July 2025 13:45:46 +0000 (0:00:00.191) 0:03:25.861 *********
2025-07-12 13:46:12.807819 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:12.807827 | orchestrator |
2025-07-12 13:46:12.807835 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2025-07-12 13:46:12.807843 | orchestrator | Saturday 12 July 2025 13:45:46 +0000 (0:00:00.194) 0:03:26.056 *********
2025-07-12 13:46:12.807850 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:12.807858 | orchestrator |
2025-07-12 13:46:12.807866 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2025-07-12 13:46:12.807874 | orchestrator | Saturday 12 July 2025 13:45:46 +0000 (0:00:00.546) 0:03:26.602 *********
2025-07-12 13:46:12.807881 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2025-07-12 13:46:12.807889 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2025-07-12 13:46:12.807897 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:12.807904 | orchestrator |
2025-07-12 13:46:12.807916 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2025-07-12 13:46:12.807924 | orchestrator | Saturday 12 July 2025 13:45:47 +0000 (0:00:00.293) 0:03:26.895 *********
2025-07-12 13:46:12.807932 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:12.807939 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:46:12.807986 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:46:12.807995 | orchestrator |
2025-07-12 13:46:12.808003 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2025-07-12 13:46:12.808010 | orchestrator | Saturday 12 July 2025 13:45:47 +0000 (0:00:00.341) 0:03:27.237 *********
2025-07-12 13:46:12.808018 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:46:12.808032 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:46:12.808040 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:46:12.808047 | orchestrator |
2025-07-12 13:46:12.808055 | orchestrator | PLAY [Apply role k9s] **********************************************************
2025-07-12 13:46:12.808063 | orchestrator |
2025-07-12 13:46:12.808070 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2025-07-12 13:46:12.808078 | orchestrator | Saturday 12 July 2025 13:45:48 +0000 (0:00:00.973) 0:03:28.210 *********
2025-07-12 13:46:12.808086 | orchestrator | ok: [testbed-manager]
2025-07-12 13:46:12.808093 | orchestrator |
2025-07-12 13:46:12.808101 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2025-07-12 13:46:12.808109 | orchestrator | Saturday 12 July 2025 13:45:48 +0000 (0:00:00.242) 0:03:28.453 *********
2025-07-12 13:46:12.808116 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2025-07-12 13:46:12.808124 | orchestrator |
2025-07-12 13:46:12.808132 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2025-07-12 13:46:12.808139 | orchestrator | Saturday 12 July 2025 13:45:49 +0000 (0:00:00.187) 0:03:28.640 *********
2025-07-12 13:46:12.808147 | orchestrator | changed: [testbed-manager]
2025-07-12 13:46:12.808155 | orchestrator |
2025-07-12 13:46:12.808163 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2025-07-12 13:46:12.808170 | orchestrator |
2025-07-12 13:46:12.808178 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2025-07-12 13:46:12.808191 | orchestrator | Saturday 12 July 2025 13:45:54 +0000 (0:00:05.614) 0:03:34.255 *********
2025-07-12 13:46:12.808199 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:46:12.808207 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:46:12.808215 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:46:12.808222 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:46:12.808230 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:46:12.808237 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:46:12.808245 | orchestrator |
2025-07-12 13:46:12.808253 | orchestrator | TASK [Manage labels] ***********************************************************
2025-07-12 13:46:12.808261 | orchestrator | Saturday 12 July 2025 13:45:55 +0000 (0:00:00.553) 0:03:34.808 *********
2025-07-12 13:46:12.808269 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-07-12 13:46:12.808276 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-07-12 13:46:12.808284 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-07-12 13:46:12.808292 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-07-12 13:46:12.808299 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-07-12 13:46:12.808307 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-07-12 13:46:12.808315 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-07-12 13:46:12.808322 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-07-12 13:46:12.808330 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2025-07-12 13:46:12.808338 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2025-07-12 13:46:12.808346 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-07-12 13:46:12.808353 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-07-12 13:46:12.808361 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2025-07-12 13:46:12.808369 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-07-12 13:46:12.808376 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-07-12 13:46:12.808389 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-07-12 13:46:12.808397 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-07-12 13:46:12.808405 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-07-12 13:46:12.808413 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-07-12 13:46:12.808420 | orchestrator | ok:
[testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-07-12 13:46:12.808428 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-07-12 13:46:12.808436 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-07-12 13:46:12.808443 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-07-12 13:46:12.808451 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-07-12 13:46:12.808459 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-07-12 13:46:12.808466 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-07-12 13:46:12.808474 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-07-12 13:46:12.808481 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-07-12 13:46:12.808489 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-07-12 13:46:12.808497 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-07-12 13:46:12.808503 | orchestrator | 2025-07-12 13:46:12.808510 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-07-12 13:46:12.808516 | orchestrator | Saturday 12 July 2025 13:46:08 +0000 (0:00:13.095) 0:03:47.904 ********* 2025-07-12 13:46:12.808523 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:46:12.808530 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:46:12.808536 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:46:12.808542 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:12.808549 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:46:12.808556 | orchestrator | skipping: 
[testbed-node-2]
2025-07-12 13:46:12.808562 | orchestrator |
2025-07-12 13:46:12.808569 | orchestrator | TASK [Manage taints] ***********************************************************
2025-07-12 13:46:12.808575 | orchestrator | Saturday 12 July 2025 13:46:08 +0000 (0:00:00.561) 0:03:48.465 *********
2025-07-12 13:46:12.808582 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:46:12.808588 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:46:12.808595 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:46:12.808601 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:12.808608 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:46:12.808614 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:46:12.808621 | orchestrator |
2025-07-12 13:46:12.808627 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:46:12.808638 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:46:12.808645 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0
2025-07-12 13:46:12.808652 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-07-12 13:46:12.808659 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-07-12 13:46:12.808665 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-07-12 13:46:12.808676 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-07-12 13:46:12.808683 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-07-12 13:46:12.808689 | orchestrator |
2025-07-12 13:46:12.808696 | orchestrator |
2025-07-12 13:46:12.808703 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:46:12.808709 | orchestrator | Saturday 12 July 2025 13:46:09 +0000 (0:00:00.674) 0:03:49.139 *********
2025-07-12 13:46:12.808716 | orchestrator | ===============================================================================
2025-07-12 13:46:12.808722 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.78s
2025-07-12 13:46:12.808729 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.23s
2025-07-12 13:46:12.808735 | orchestrator | Manage labels ---------------------------------------------------------- 13.10s
2025-07-12 13:46:12.808748 | orchestrator | kubectl : Install required packages ------------------------------------ 12.68s
2025-07-12 13:46:12.808754 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 12.35s
2025-07-12 13:46:12.808761 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.44s
2025-07-12 13:46:12.808767 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.34s
2025-07-12 13:46:12.808774 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.61s
2025-07-12 13:46:12.808781 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.23s
2025-07-12 13:46:12.808787 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 3.21s
2025-07-12 13:46:12.808794 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 2.79s
2025-07-12 13:46:12.808800 | orchestrator | k3s_server : Kill the temporary service used for initialization --------- 2.75s
2025-07-12 13:46:12.808807 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.29s
2025-07-12 13:46:12.808813 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.91s
2025-07-12 13:46:12.808820 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.86s
2025-07-12 13:46:12.808826 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.85s
2025-07-12 13:46:12.808836 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 1.77s
2025-07-12 13:46:12.808843 | orchestrator | k3s_agent : Configure the k3s service ----------------------------------- 1.73s
2025-07-12 13:46:12.808849 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 1.69s
2025-07-12 13:46:12.808855 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.69s
2025-07-12 13:46:12.808862 | orchestrator | 2025-07-12 13:46:12 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED
2025-07-12 13:46:12.808869 | orchestrator | 2025-07-12 13:46:12 | INFO  | Task 3a50e7db-9afc-4516-9052-28a6ec1d1407 is in state STARTED
2025-07-12 13:46:12.808876 | orchestrator | 2025-07-12 13:46:12 | INFO  | Task 2f5a327b-aeff-47e0-a30c-3a088fb4e042 is in state STARTED
2025-07-12 13:46:12.808882 | orchestrator | 2025-07-12 13:46:12 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED
2025-07-12 13:46:12.808889 | orchestrator | 2025-07-12 13:46:12 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:46:15.855296 | orchestrator | 2025-07-12 13:46:15 | INFO  | Task c93a338b-2385-4a27-b500-0b6e9a6c32e1 is in state STARTED
2025-07-12 13:46:15.855415 | orchestrator | 2025-07-12 13:46:15 | INFO  | Task b4df9d47-288e-4475-a750-9dffc0e5dcee is in state STARTED
2025-07-12 13:46:15.856496 | orchestrator | 2025-07-12 13:46:15 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED
2025-07-12 13:46:15.857272 | orchestrator | 2025-07-12
13:46:15 | INFO  | Task 3a50e7db-9afc-4516-9052-28a6ec1d1407 is in state STARTED 2025-07-12 13:46:15.857900 | orchestrator | 2025-07-12 13:46:15 | INFO  | Task 2f5a327b-aeff-47e0-a30c-3a088fb4e042 is in state STARTED 2025-07-12 13:46:15.858884 | orchestrator | 2025-07-12 13:46:15 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:46:15.858911 | orchestrator | 2025-07-12 13:46:15 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:46:18.888705 | orchestrator | 2025-07-12 13:46:18 | INFO  | Task c93a338b-2385-4a27-b500-0b6e9a6c32e1 is in state STARTED 2025-07-12 13:46:18.892303 | orchestrator | 2025-07-12 13:46:18 | INFO  | Task b4df9d47-288e-4475-a750-9dffc0e5dcee is in state STARTED 2025-07-12 13:46:18.894822 | orchestrator | 2025-07-12 13:46:18 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:46:18.895176 | orchestrator | 2025-07-12 13:46:18 | INFO  | Task 3a50e7db-9afc-4516-9052-28a6ec1d1407 is in state SUCCESS 2025-07-12 13:46:18.895895 | orchestrator | 2025-07-12 13:46:18 | INFO  | Task 2f5a327b-aeff-47e0-a30c-3a088fb4e042 is in state STARTED 2025-07-12 13:46:18.899323 | orchestrator | 2025-07-12 13:46:18 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:46:18.900400 | orchestrator | 2025-07-12 13:46:18 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:46:21.925806 | orchestrator | 2025-07-12 13:46:21 | INFO  | Task c93a338b-2385-4a27-b500-0b6e9a6c32e1 is in state SUCCESS 2025-07-12 13:46:21.925914 | orchestrator | 2025-07-12 13:46:21 | INFO  | Task b4df9d47-288e-4475-a750-9dffc0e5dcee is in state STARTED 2025-07-12 13:46:21.927340 | orchestrator | 2025-07-12 13:46:21 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:46:21.928077 | orchestrator | 2025-07-12 13:46:21 | INFO  | Task 2f5a327b-aeff-47e0-a30c-3a088fb4e042 is in state STARTED 2025-07-12 13:46:21.928782 | orchestrator | 2025-07-12 
13:46:21 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:46:21.928805 | orchestrator | 2025-07-12 13:46:21 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:46:24.970732 | orchestrator | 2025-07-12 13:46:24 | INFO  | Task b4df9d47-288e-4475-a750-9dffc0e5dcee is in state STARTED 2025-07-12 13:46:24.972617 | orchestrator | 2025-07-12 13:46:24 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:46:24.973101 | orchestrator | 2025-07-12 13:46:24 | INFO  | Task 2f5a327b-aeff-47e0-a30c-3a088fb4e042 is in state STARTED 2025-07-12 13:46:24.974313 | orchestrator | 2025-07-12 13:46:24 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:46:24.974440 | orchestrator | 2025-07-12 13:46:24 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:46:28.012485 | orchestrator | 2025-07-12 13:46:28 | INFO  | Task b4df9d47-288e-4475-a750-9dffc0e5dcee is in state STARTED 2025-07-12 13:46:28.012594 | orchestrator | 2025-07-12 13:46:28 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:46:28.014575 | orchestrator | 2025-07-12 13:46:28 | INFO  | Task 2f5a327b-aeff-47e0-a30c-3a088fb4e042 is in state STARTED 2025-07-12 13:46:28.014665 | orchestrator | 2025-07-12 13:46:28 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:46:28.014681 | orchestrator | 2025-07-12 13:46:28 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:46:31.048414 | orchestrator | 2025-07-12 13:46:31 | INFO  | Task b4df9d47-288e-4475-a750-9dffc0e5dcee is in state STARTED 2025-07-12 13:46:31.048862 | orchestrator | 2025-07-12 13:46:31 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:46:31.054497 | orchestrator | 2025-07-12 13:46:31 | INFO  | Task 2f5a327b-aeff-47e0-a30c-3a088fb4e042 is in state STARTED 2025-07-12 13:46:31.055107 | orchestrator | 2025-07-12 13:46:31 | INFO  | Task 
2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED
2025-07-12 13:46:31.055477 | orchestrator | 2025-07-12 13:46:31 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:46:34.100632 | orchestrator | 2025-07-12 13:46:34 | INFO  | Task b4df9d47-288e-4475-a750-9dffc0e5dcee is in state STARTED
2025-07-12 13:46:34.102989 | orchestrator | 2025-07-12 13:46:34 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED
2025-07-12 13:46:34.105824 | orchestrator | 2025-07-12 13:46:34 | INFO  | Task 2f5a327b-aeff-47e0-a30c-3a088fb4e042 is in state STARTED
2025-07-12 13:46:34.106272 | orchestrator | 2025-07-12 13:46:34 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED
2025-07-12 13:46:34.106498 | orchestrator | 2025-07-12 13:46:34 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:46:37.137896 | orchestrator | 2025-07-12 13:46:37 | INFO  | Task b4df9d47-288e-4475-a750-9dffc0e5dcee is in state STARTED
2025-07-12 13:46:37.139095 | orchestrator | 2025-07-12 13:46:37 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED
2025-07-12 13:46:37.141880 | orchestrator | 2025-07-12 13:46:37 | INFO  | Task 2f5a327b-aeff-47e0-a30c-3a088fb4e042 is in state SUCCESS
2025-07-12 13:46:37.144196 | orchestrator |
2025-07-12 13:46:37.144232 | orchestrator |
2025-07-12 13:46:37.144245 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2025-07-12 13:46:37.144257 | orchestrator |
2025-07-12 13:46:37.144268 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-07-12 13:46:37.144279 | orchestrator | Saturday 12 July 2025 13:46:13 +0000 (0:00:00.166) 0:00:00.166 *********
2025-07-12 13:46:37.144291 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-07-12 13:46:37.144301 | orchestrator |
2025-07-12 13:46:37.144312 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-07-12 13:46:37.144323 | orchestrator | Saturday 12 July 2025 13:46:14 +0000 (0:00:00.766) 0:00:00.933 *********
2025-07-12 13:46:37.144333 | orchestrator | changed: [testbed-manager]
2025-07-12 13:46:37.144345 | orchestrator |
2025-07-12 13:46:37.144356 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2025-07-12 13:46:37.144367 | orchestrator | Saturday 12 July 2025 13:46:15 +0000 (0:00:01.119) 0:00:02.052 *********
2025-07-12 13:46:37.144377 | orchestrator | changed: [testbed-manager]
2025-07-12 13:46:37.144388 | orchestrator |
2025-07-12 13:46:37.144398 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:46:37.144410 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:46:37.144422 | orchestrator |
2025-07-12 13:46:37.144433 | orchestrator |
2025-07-12 13:46:37.144443 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:46:37.144454 | orchestrator | Saturday 12 July 2025 13:46:16 +0000 (0:00:00.373) 0:00:02.426 *********
2025-07-12 13:46:37.144466 | orchestrator | ===============================================================================
2025-07-12 13:46:37.144476 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.12s
2025-07-12 13:46:37.144487 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.77s
2025-07-12 13:46:37.144498 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.37s
2025-07-12 13:46:37.144508 | orchestrator |
2025-07-12 13:46:37.144548 | orchestrator |
2025-07-12 13:46:37.144559 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-07-12 13:46:37.144569 | orchestrator |
2025-07-12 13:46:37.144580 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-07-12 13:46:37.144590 | orchestrator | Saturday 12 July 2025 13:46:13 +0000 (0:00:00.127) 0:00:00.127 *********
2025-07-12 13:46:37.144601 | orchestrator | ok: [testbed-manager]
2025-07-12 13:46:37.144612 | orchestrator |
2025-07-12 13:46:37.144623 | orchestrator | TASK [Create .kube directory] **************************************************
2025-07-12 13:46:37.144633 | orchestrator | Saturday 12 July 2025 13:46:14 +0000 (0:00:00.548) 0:00:00.676 *********
2025-07-12 13:46:37.144644 | orchestrator | ok: [testbed-manager]
2025-07-12 13:46:37.144654 | orchestrator |
2025-07-12 13:46:37.144665 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-07-12 13:46:37.144675 | orchestrator | Saturday 12 July 2025 13:46:15 +0000 (0:00:00.640) 0:00:01.316 *********
2025-07-12 13:46:37.144686 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-07-12 13:46:37.144696 | orchestrator |
2025-07-12 13:46:37.144706 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-07-12 13:46:37.144717 | orchestrator | Saturday 12 July 2025 13:46:15 +0000 (0:00:00.699) 0:00:02.016 *********
2025-07-12 13:46:37.144742 | orchestrator | changed: [testbed-manager]
2025-07-12 13:46:37.144753 | orchestrator |
2025-07-12 13:46:37.144764 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-07-12 13:46:37.144774 | orchestrator | Saturday 12 July 2025 13:46:16 +0000 (0:00:00.961) 0:00:02.977 *********
2025-07-12 13:46:37.144784 | orchestrator | changed: [testbed-manager]
2025-07-12 13:46:37.144795 | orchestrator |
2025-07-12 13:46:37.144805 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-07-12 13:46:37.144817 | orchestrator | Saturday 12 July 2025 13:46:17 +0000 (0:00:00.667) 0:00:03.645 *********
2025-07-12 13:46:37.144830 | orchestrator | changed: [testbed-manager -> localhost]
2025-07-12 13:46:37.144842 | orchestrator |
2025-07-12 13:46:37.144854 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-07-12 13:46:37.144866 | orchestrator | Saturday 12 July 2025 13:46:18 +0000 (0:00:01.538) 0:00:05.183 *********
2025-07-12 13:46:37.144878 | orchestrator | changed: [testbed-manager -> localhost]
2025-07-12 13:46:37.144889 | orchestrator |
2025-07-12 13:46:37.144901 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-07-12 13:46:37.144933 | orchestrator | Saturday 12 July 2025 13:46:19 +0000 (0:00:00.837) 0:00:06.020 *********
2025-07-12 13:46:37.144946 | orchestrator | ok: [testbed-manager]
2025-07-12 13:46:37.144957 | orchestrator |
2025-07-12 13:46:37.144969 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-07-12 13:46:37.144981 | orchestrator | Saturday 12 July 2025 13:46:20 +0000 (0:00:00.373) 0:00:06.393 *********
2025-07-12 13:46:37.144992 | orchestrator | ok: [testbed-manager]
2025-07-12 13:46:37.145004 | orchestrator |
2025-07-12 13:46:37.145016 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:46:37.145028 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:46:37.145040 | orchestrator |
2025-07-12 13:46:37.145052 | orchestrator |
2025-07-12 13:46:37.145063 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:46:37.145075 | orchestrator | Saturday 12 July 2025 13:46:20 +0000 (0:00:00.271) 0:00:06.665 *********
2025-07-12 13:46:37.145087 | orchestrator | ===============================================================================
2025-07-12 13:46:37.145099 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.54s
2025-07-12 13:46:37.145111 | orchestrator | Write kubeconfig file --------------------------------------------------- 0.96s
2025-07-12 13:46:37.145124 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.84s
2025-07-12 13:46:37.145148 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.70s
2025-07-12 13:46:37.145221 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.67s
2025-07-12 13:46:37.145233 | orchestrator | Create .kube directory -------------------------------------------------- 0.64s
2025-07-12 13:46:37.145244 | orchestrator | Get home directory of operator user ------------------------------------- 0.55s
2025-07-12 13:46:37.145254 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.37s
2025-07-12 13:46:37.145264 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.27s
2025-07-12 13:46:37.145275 | orchestrator |
2025-07-12 13:46:37.145286 | orchestrator |
2025-07-12 13:46:37.145296 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 13:46:37.145307 | orchestrator |
2025-07-12 13:46:37.145317 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 13:46:37.145328 | orchestrator | Saturday 12 July 2025 13:45:28 +0000 (0:00:00.436) 0:00:00.436 *********
2025-07-12 13:46:37.145446 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:46:37.145459 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:46:37.145469 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:46:37.145480 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:46:37.145490 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:46:37.145501 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:46:37.145511 | orchestrator |
2025-07-12 13:46:37.145522 | orchestrator |
TASK [Group hosts based on enabled services] *********************************** 2025-07-12 13:46:37.145533 | orchestrator | Saturday 12 July 2025 13:45:29 +0000 (0:00:00.915) 0:00:01.352 ********* 2025-07-12 13:46:37.145543 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-07-12 13:46:37.145554 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-07-12 13:46:37.145564 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-07-12 13:46:37.145575 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-07-12 13:46:37.145585 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-07-12 13:46:37.145596 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-07-12 13:46:37.145607 | orchestrator | 2025-07-12 13:46:37.145617 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-07-12 13:46:37.145628 | orchestrator | 2025-07-12 13:46:37.145639 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-07-12 13:46:37.145649 | orchestrator | Saturday 12 July 2025 13:45:30 +0000 (0:00:00.869) 0:00:02.221 ********* 2025-07-12 13:46:37.145661 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:46:37.145674 | orchestrator | 2025-07-12 13:46:37.145684 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-07-12 13:46:37.145695 | orchestrator | Saturday 12 July 2025 13:45:31 +0000 (0:00:01.488) 0:00:03.710 ********* 2025-07-12 13:46:37.145706 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-07-12 13:46:37.145717 | orchestrator | 
changed: [testbed-node-3] => (item=openvswitch) 2025-07-12 13:46:37.145734 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-07-12 13:46:37.145745 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-07-12 13:46:37.145756 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-07-12 13:46:37.145766 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-07-12 13:46:37.145777 | orchestrator | 2025-07-12 13:46:37.145788 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-07-12 13:46:37.145798 | orchestrator | Saturday 12 July 2025 13:45:33 +0000 (0:00:02.009) 0:00:05.719 ********* 2025-07-12 13:46:37.145809 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-07-12 13:46:37.145819 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-07-12 13:46:37.145838 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-07-12 13:46:37.145848 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-07-12 13:46:37.145859 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-07-12 13:46:37.145869 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-07-12 13:46:37.145880 | orchestrator | 2025-07-12 13:46:37.145891 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-07-12 13:46:37.145901 | orchestrator | Saturday 12 July 2025 13:45:36 +0000 (0:00:02.454) 0:00:08.174 ********* 2025-07-12 13:46:37.145930 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-07-12 13:46:37.145941 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:46:37.145952 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-07-12 13:46:37.145962 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:46:37.145972 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-07-12 13:46:37.145983 | 
orchestrator | skipping: [testbed-node-5] 2025-07-12 13:46:37.145993 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-07-12 13:46:37.146004 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-07-12 13:46:37.146014 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:37.146075 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:46:37.146087 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-07-12 13:46:37.146100 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:46:37.146111 | orchestrator | 2025-07-12 13:46:37.146123 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-07-12 13:46:37.146135 | orchestrator | Saturday 12 July 2025 13:45:38 +0000 (0:00:01.726) 0:00:09.900 ********* 2025-07-12 13:46:37.146147 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:46:37.146159 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:46:37.146170 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:46:37.146192 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:37.146204 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:46:37.146216 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:46:37.146228 | orchestrator | 2025-07-12 13:46:37.146240 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-07-12 13:46:37.146251 | orchestrator | Saturday 12 July 2025 13:45:39 +0000 (0:00:01.181) 0:00:11.082 ********* 2025-07-12 13:46:37.146266 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 13:46:37.146284 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 13:46:37.146303 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 13:46:37.146325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 13:46:37.146337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 13:46:37.146359 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 13:46:37.146373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 13:46:37.146386 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 13:46:37.146411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 13:46:37.146424 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 13:46:37.146436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 13:46:37.146455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 
'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 13:46:37.146467 | orchestrator | 2025-07-12 13:46:37.146477 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-07-12 13:46:37.146488 | orchestrator | Saturday 12 July 2025 13:45:41 +0000 (0:00:02.152) 0:00:13.234 ********* 2025-07-12 13:46:37.146500 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 13:46:37.146518 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 13:46:37.146534 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 13:46:37.146545 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 13:46:37.146610 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 13:46:37.146624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 13:46:37.146635 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 13:46:37.146660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 13:46:37.146671 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 13:46:37.146682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 
'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 13:46:37.146701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 13:46:37.146713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 13:46:37.146730 | 
orchestrator | 2025-07-12 13:46:37.146742 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-07-12 13:46:37.146752 | orchestrator | Saturday 12 July 2025 13:45:46 +0000 (0:00:04.628) 0:00:17.863 ********* 2025-07-12 13:46:37.146763 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:46:37.146774 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:46:37.146785 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:46:37.146795 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:46:37.146806 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:37.146816 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:46:37.146827 | orchestrator | 2025-07-12 13:46:37.146838 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-07-12 13:46:37.146848 | orchestrator | Saturday 12 July 2025 13:45:47 +0000 (0:00:00.930) 0:00:18.793 ********* 2025-07-12 13:46:37.146870 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 13:46:37.146882 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 13:46:37.146893 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 13:46:37.146943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 
'timeout': '30'}}}) 2025-07-12 13:46:37.146963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 13:46:37.146975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 13:46:37.146990 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 13:46:37.147002 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 13:46:37.147019 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 13:46:37.147031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 13:46:37.147048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 13:46:37.147060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 13:46:37.147070 | orchestrator | 2025-07-12 13:46:37.147086 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-12 13:46:37.147098 | orchestrator | Saturday 12 July 2025 13:45:49 +0000 (0:00:02.804) 0:00:21.598 ********* 2025-07-12 13:46:37.147108 | orchestrator | 2025-07-12 13:46:37.147119 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-12 13:46:37.147130 | orchestrator | Saturday 12 July 2025 13:45:50 +0000 (0:00:00.166) 0:00:21.764 ********* 2025-07-12 13:46:37.147140 | orchestrator | 2025-07-12 13:46:37.147151 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-12 13:46:37.147162 | orchestrator | Saturday 12 July 2025 13:45:50 +0000 (0:00:00.168) 0:00:21.933 ********* 2025-07-12 13:46:37.147172 | orchestrator | 2025-07-12 13:46:37.147183 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-12 13:46:37.147194 | orchestrator | Saturday 12 July 2025 13:45:50 +0000 (0:00:00.127) 0:00:22.060 ********* 2025-07-12 13:46:37.147204 | orchestrator | 2025-07-12 13:46:37.147215 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-12 13:46:37.147225 | orchestrator | Saturday 12 July 2025 13:45:50 +0000 (0:00:00.172) 0:00:22.233 ********* 2025-07-12 13:46:37.147236 | orchestrator | 2025-07-12 13:46:37.147246 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-12 13:46:37.147257 | orchestrator | Saturday 12 July 2025 13:45:50 +0000 (0:00:00.143) 0:00:22.377 ********* 2025-07-12 13:46:37.147267 | orchestrator | 2025-07-12 13:46:37.147278 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-07-12 13:46:37.147289 | orchestrator | Saturday 12 
July 2025 13:45:50 +0000 (0:00:00.274) 0:00:22.651 ********* 2025-07-12 13:46:37.147299 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:46:37.147310 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:46:37.147320 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:46:37.147331 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:46:37.147341 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:46:37.147352 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:46:37.147362 | orchestrator | 2025-07-12 13:46:37.147373 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-07-12 13:46:37.147389 | orchestrator | Saturday 12 July 2025 13:46:02 +0000 (0:00:11.483) 0:00:34.135 ********* 2025-07-12 13:46:37.147400 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:46:37.147411 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:46:37.147421 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:46:37.147432 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:46:37.147442 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:46:37.147453 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:46:37.147463 | orchestrator | 2025-07-12 13:46:37.147474 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-07-12 13:46:37.147490 | orchestrator | Saturday 12 July 2025 13:46:04 +0000 (0:00:02.305) 0:00:36.441 ********* 2025-07-12 13:46:37.147501 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:46:37.147512 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:46:37.147522 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:46:37.147533 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:46:37.147543 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:46:37.147554 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:46:37.147565 | orchestrator | 2025-07-12 13:46:37.147575 | orchestrator | TASK [openvswitch : Set system-id, hostname 
and hw-offload] ******************** 2025-07-12 13:46:37.147586 | orchestrator | Saturday 12 July 2025 13:46:14 +0000 (0:00:10.045) 0:00:46.486 ********* 2025-07-12 13:46:37.147596 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-07-12 13:46:37.147607 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-07-12 13:46:37.147618 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-07-12 13:46:37.147628 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-07-12 13:46:37.147639 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-07-12 13:46:37.147649 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-07-12 13:46:37.147660 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-07-12 13:46:37.147670 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-07-12 13:46:37.147681 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-07-12 13:46:37.147691 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-07-12 13:46:37.147702 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-07-12 13:46:37.147712 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-07-12 13:46:37.147723 
| orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-12 13:46:37.147733 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-12 13:46:37.147744 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-12 13:46:37.147769 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-12 13:46:37.147780 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-12 13:46:37.147791 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-12 13:46:37.147821 | orchestrator | 2025-07-12 13:46:37.147832 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-07-12 13:46:37.147843 | orchestrator | Saturday 12 July 2025 13:46:21 +0000 (0:00:07.127) 0:00:53.613 ********* 2025-07-12 13:46:37.147854 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-07-12 13:46:37.147865 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:46:37.147875 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-07-12 13:46:37.147886 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:46:37.147896 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-07-12 13:46:37.147921 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:46:37.147932 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-07-12 13:46:37.147943 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-07-12 13:46:37.147953 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-07-12 13:46:37.147964 | orchestrator | 2025-07-12 13:46:37.147975 | 
orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-07-12 13:46:37.147985 | orchestrator | Saturday 12 July 2025 13:46:24 +0000 (0:00:02.307) 0:00:55.921 ********* 2025-07-12 13:46:37.147996 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-07-12 13:46:37.148006 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:46:37.148017 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-07-12 13:46:37.148028 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:46:37.148038 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-07-12 13:46:37.148049 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:46:37.148059 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-07-12 13:46:37.148070 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-07-12 13:46:37.148080 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-07-12 13:46:37.148091 | orchestrator | 2025-07-12 13:46:37.148101 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-07-12 13:46:37.148112 | orchestrator | Saturday 12 July 2025 13:46:27 +0000 (0:00:03.832) 0:00:59.754 ********* 2025-07-12 13:46:37.148123 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:46:37.148133 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:46:37.148149 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:46:37.148160 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:46:37.148171 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:46:37.148181 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:46:37.148192 | orchestrator | 2025-07-12 13:46:37.148202 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:46:37.148213 | orchestrator | testbed-node-0 : ok=15  changed=11  
unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-12 13:46:37.148224 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-12 13:46:37.148235 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-12 13:46:37.148246 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 13:46:37.148256 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 13:46:37.148267 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 13:46:37.148277 | orchestrator | 2025-07-12 13:46:37.148288 | orchestrator | 2025-07-12 13:46:37.148299 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:46:37.148330 | orchestrator | Saturday 12 July 2025 13:46:35 +0000 (0:00:07.908) 0:01:07.663 ********* 2025-07-12 13:46:37.148342 | orchestrator | =============================================================================== 2025-07-12 13:46:37.148352 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 17.95s 2025-07-12 13:46:37.148363 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.48s 2025-07-12 13:46:37.148373 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.13s 2025-07-12 13:46:37.153401 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.63s 2025-07-12 13:46:37.153453 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.83s 2025-07-12 13:46:37.153463 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.80s 2025-07-12 13:46:37.153473 | orchestrator | module-load : Persist modules via 
modules-load.d ------------------------ 2.45s 2025-07-12 13:46:37.153482 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.31s 2025-07-12 13:46:37.153496 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.31s 2025-07-12 13:46:37.153506 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.15s 2025-07-12 13:46:37.153515 | orchestrator | module-load : Load modules ---------------------------------------------- 2.01s 2025-07-12 13:46:37.153525 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.73s 2025-07-12 13:46:37.153534 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.49s 2025-07-12 13:46:37.153543 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.18s 2025-07-12 13:46:37.153552 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.05s 2025-07-12 13:46:37.153562 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.93s 2025-07-12 13:46:37.153571 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.92s 2025-07-12 13:46:37.153580 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.87s 2025-07-12 13:46:37.153589 | orchestrator | 2025-07-12 13:46:37 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:46:37.153628 | orchestrator | 2025-07-12 13:46:37 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:46:40.189976 | orchestrator | 2025-07-12 13:46:40 | INFO  | Task b4df9d47-288e-4475-a750-9dffc0e5dcee is in state STARTED 2025-07-12 13:46:40.192187 | orchestrator | 2025-07-12 13:46:40 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:46:40.194832 | orchestrator | 2025-07-12 13:46:40 | 
INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:46:40.196648 | orchestrator | 2025-07-12 13:46:40 | INFO  | Task 1565ddc5-b47f-4855-b487-bef57c856337 is in state STARTED 2025-07-12 13:46:40.197109 | orchestrator | 2025-07-12 13:46:40 | INFO  | Wait 1 second(s) until the next check [identical status-check cycles repeated every ~3 s from 13:46:43 through 13:47:59: Tasks b4df9d47-288e-4475-a750-9dffc0e5dcee, 6b27fb14-d1d8-4088-b016-ddd50bbe2964, 2dc9582a-1d30-453e-b75d-caf3a2ab735c and 1565ddc5-b47f-4855-b487-bef57c856337 remained in state STARTED] 2025-07-12 13:48:02.477611 | orchestrator | 2025-07-12 13:48:02 | INFO  | Task b4df9d47-288e-4475-a750-9dffc0e5dcee is in state STARTED 2025-07-12 13:48:02.478504 | orchestrator | 2025-07-12 13:48:02 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:48:02.479443 | orchestrator | 2025-07-12 13:48:02 | INFO  | Task
2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:48:02.480424 | orchestrator | 2025-07-12 13:48:02 | INFO  | Task 1565ddc5-b47f-4855-b487-bef57c856337 is in state STARTED 2025-07-12 13:48:02.480446 | orchestrator | 2025-07-12 13:48:02 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:48:05.514239 | orchestrator | 2025-07-12 13:48:05 | INFO  | Task b4df9d47-288e-4475-a750-9dffc0e5dcee is in state STARTED 2025-07-12 13:48:05.514587 | orchestrator | 2025-07-12 13:48:05 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:48:05.515449 | orchestrator | 2025-07-12 13:48:05 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:48:05.516035 | orchestrator | 2025-07-12 13:48:05 | INFO  | Task 1565ddc5-b47f-4855-b487-bef57c856337 is in state STARTED 2025-07-12 13:48:05.516059 | orchestrator | 2025-07-12 13:48:05 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:48:08.551885 | orchestrator | 2025-07-12 13:48:08 | INFO  | Task b4df9d47-288e-4475-a750-9dffc0e5dcee is in state STARTED 2025-07-12 13:48:08.552843 | orchestrator | 2025-07-12 13:48:08 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:48:08.553866 | orchestrator | 2025-07-12 13:48:08 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:48:08.555235 | orchestrator | 2025-07-12 13:48:08 | INFO  | Task 1565ddc5-b47f-4855-b487-bef57c856337 is in state STARTED 2025-07-12 13:48:08.555261 | orchestrator | 2025-07-12 13:48:08 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:48:11.601834 | orchestrator | 2025-07-12 13:48:11 | INFO  | Task b4df9d47-288e-4475-a750-9dffc0e5dcee is in state SUCCESS 2025-07-12 13:48:11.602730 | orchestrator | 2025-07-12 13:48:11.602853 | orchestrator | 2025-07-12 13:48:11.602870 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-07-12 13:48:11.602883 | 
orchestrator | 2025-07-12 13:48:11.602894 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-07-12 13:48:11.602906 | orchestrator | Saturday 12 July 2025 13:45:49 +0000 (0:00:00.319) 0:00:00.320 ********* 2025-07-12 13:48:11.602944 | orchestrator | ok: [localhost] => { 2025-07-12 13:48:11.602958 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-07-12 13:48:11.602969 | orchestrator | } 2025-07-12 13:48:11.602980 | orchestrator | 2025-07-12 13:48:11.602991 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-07-12 13:48:11.603002 | orchestrator | Saturday 12 July 2025 13:45:49 +0000 (0:00:00.092) 0:00:00.412 ********* 2025-07-12 13:48:11.603013 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-07-12 13:48:11.603026 | orchestrator | ...ignoring 2025-07-12 13:48:11.603036 | orchestrator | 2025-07-12 13:48:11.603047 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-07-12 13:48:11.603058 | orchestrator | Saturday 12 July 2025 13:45:52 +0000 (0:00:02.997) 0:00:03.410 ********* 2025-07-12 13:48:11.603068 | orchestrator | skipping: [localhost] 2025-07-12 13:48:11.603079 | orchestrator | 2025-07-12 13:48:11.603089 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-07-12 13:48:11.603100 | orchestrator | Saturday 12 July 2025 13:45:52 +0000 (0:00:00.088) 0:00:03.498 ********* 2025-07-12 13:48:11.603111 | orchestrator | ok: [localhost] 2025-07-12 13:48:11.603121 | orchestrator | 2025-07-12 13:48:11.603132 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 13:48:11.603142 | orchestrator | 2025-07-12 
13:48:11.603153 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 13:48:11.603164 | orchestrator | Saturday 12 July 2025 13:45:52 +0000 (0:00:00.409) 0:00:03.908 ********* 2025-07-12 13:48:11.603174 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:48:11.603185 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:48:11.603195 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:48:11.603205 | orchestrator | 2025-07-12 13:48:11.603216 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 13:48:11.603227 | orchestrator | Saturday 12 July 2025 13:45:53 +0000 (0:00:00.390) 0:00:04.299 ********* 2025-07-12 13:48:11.603237 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-07-12 13:48:11.603249 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-07-12 13:48:11.603259 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-07-12 13:48:11.603270 | orchestrator | 2025-07-12 13:48:11.603280 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-07-12 13:48:11.603293 | orchestrator | 2025-07-12 13:48:11.603306 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-07-12 13:48:11.603333 | orchestrator | Saturday 12 July 2025 13:45:54 +0000 (0:00:01.194) 0:00:05.494 ********* 2025-07-12 13:48:11.603346 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:48:11.603357 | orchestrator | 2025-07-12 13:48:11.603370 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-07-12 13:48:11.603382 | orchestrator | Saturday 12 July 2025 13:45:55 +0000 (0:00:00.704) 0:00:06.198 ********* 2025-07-12 13:48:11.603394 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:48:11.603406 | orchestrator | 
2025-07-12 13:48:11.603417 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-07-12 13:48:11.603429 | orchestrator | Saturday 12 July 2025 13:45:56 +0000 (0:00:01.191) 0:00:07.389 ********* 2025-07-12 13:48:11.603441 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:48:11.603454 | orchestrator | 2025-07-12 13:48:11.603466 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-07-12 13:48:11.603478 | orchestrator | Saturday 12 July 2025 13:45:56 +0000 (0:00:00.502) 0:00:07.892 ********* 2025-07-12 13:48:11.603490 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:48:11.603502 | orchestrator | 2025-07-12 13:48:11.603514 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-07-12 13:48:11.603534 | orchestrator | Saturday 12 July 2025 13:45:57 +0000 (0:00:00.778) 0:00:08.671 ********* 2025-07-12 13:48:11.603545 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:48:11.603558 | orchestrator | 2025-07-12 13:48:11.603570 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-07-12 13:48:11.603581 | orchestrator | Saturday 12 July 2025 13:45:58 +0000 (0:00:00.615) 0:00:09.287 ********* 2025-07-12 13:48:11.603593 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:48:11.603605 | orchestrator | 2025-07-12 13:48:11.603618 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-07-12 13:48:11.603630 | orchestrator | Saturday 12 July 2025 13:45:58 +0000 (0:00:00.660) 0:00:09.947 ********* 2025-07-12 13:48:11.603642 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:48:11.603652 | orchestrator | 2025-07-12 13:48:11.603663 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 
2025-07-12 13:48:11.603674 | orchestrator | Saturday 12 July 2025 13:46:00 +0000 (0:00:01.085) 0:00:11.033 ********* 2025-07-12 13:48:11.603684 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:48:11.603695 | orchestrator | 2025-07-12 13:48:11.603705 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-07-12 13:48:11.603716 | orchestrator | Saturday 12 July 2025 13:46:00 +0000 (0:00:00.947) 0:00:11.980 ********* 2025-07-12 13:48:11.603726 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:48:11.603737 | orchestrator | 2025-07-12 13:48:11.603747 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-07-12 13:48:11.603758 | orchestrator | Saturday 12 July 2025 13:46:01 +0000 (0:00:00.392) 0:00:12.373 ********* 2025-07-12 13:48:11.603769 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:48:11.603796 | orchestrator | 2025-07-12 13:48:11.603823 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-07-12 13:48:11.603834 | orchestrator | Saturday 12 July 2025 13:46:01 +0000 (0:00:00.316) 0:00:12.690 ********* 2025-07-12 13:48:11.603850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-12 13:48:11.603872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-12 13:48:11.603893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-12 13:48:11.603905 | orchestrator | 2025-07-12 13:48:11.603916 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-07-12 13:48:11.603927 | orchestrator | Saturday 12 July 2025 13:46:03 +0000 (0:00:01.440) 0:00:14.131 ********* 2025-07-12 13:48:11.603948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-12 13:48:11.603961 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-12 13:48:11.603978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': 
'30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-12 13:48:11.603997 | orchestrator | 2025-07-12 13:48:11.604008 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-07-12 13:48:11.604019 | orchestrator | Saturday 12 July 2025 13:46:05 +0000 (0:00:02.427) 0:00:16.559 ********* 2025-07-12 13:48:11.604030 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-07-12 13:48:11.604041 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-07-12 13:48:11.604051 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-07-12 13:48:11.604062 | orchestrator | 2025-07-12 13:48:11.604073 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-07-12 13:48:11.604083 | orchestrator | Saturday 12 July 2025 13:46:08 +0000 (0:00:03.261) 0:00:19.820 ********* 2025-07-12 13:48:11.604094 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-07-12 13:48:11.604104 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-07-12 13:48:11.604115 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-07-12 13:48:11.604125 | orchestrator | 2025-07-12 13:48:11.604136 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-07-12 13:48:11.604146 | orchestrator | Saturday 12 July 2025 13:46:11 +0000 (0:00:02.936) 0:00:22.757 ********* 2025-07-12 13:48:11.604157 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-07-12 13:48:11.604167 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-07-12 13:48:11.604178 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-07-12 13:48:11.604188 | orchestrator | 2025-07-12 13:48:11.604205 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-07-12 13:48:11.604216 | orchestrator | Saturday 12 July 2025 13:46:14 +0000 (0:00:02.486) 0:00:25.244 ********* 2025-07-12 13:48:11.604227 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-07-12 13:48:11.604238 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-07-12 13:48:11.604249 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-07-12 13:48:11.604259 | orchestrator | 2025-07-12 13:48:11.604270 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-07-12 13:48:11.604281 | orchestrator | Saturday 12 July 2025 13:46:16 +0000 (0:00:02.194) 0:00:27.438 ********* 2025-07-12 13:48:11.604291 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-07-12 13:48:11.604302 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-07-12 13:48:11.604312 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-07-12 13:48:11.604323 | orchestrator | 2025-07-12 13:48:11.604333 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-07-12 13:48:11.604350 | orchestrator | Saturday 12 July 2025 13:46:18 +0000 (0:00:01.820) 0:00:29.259 ********* 2025-07-12 13:48:11.604361 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-07-12 
13:48:11.604372 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-07-12 13:48:11.604382 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-07-12 13:48:11.604393 | orchestrator | 2025-07-12 13:48:11.604404 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-07-12 13:48:11.604414 | orchestrator | Saturday 12 July 2025 13:46:20 +0000 (0:00:01.903) 0:00:31.162 ********* 2025-07-12 13:48:11.604425 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:48:11.604435 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:48:11.604446 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:48:11.604456 | orchestrator | 2025-07-12 13:48:11.604467 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-07-12 13:48:11.604478 | orchestrator | Saturday 12 July 2025 13:46:20 +0000 (0:00:00.621) 0:00:31.784 ********* 2025-07-12 13:48:11.604494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-12 13:48:11.604507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-12 13:48:11.604529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-12 13:48:11.604552 | orchestrator | 2025-07-12 13:48:11.604562 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-07-12 13:48:11.604573 | orchestrator | Saturday 12 July 2025 13:46:22 +0000 (0:00:01.541) 0:00:33.325 ********* 2025-07-12 13:48:11.604583 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:48:11.604594 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:48:11.604604 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:48:11.604614 | orchestrator | 2025-07-12 13:48:11.604625 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-07-12 13:48:11.604635 | orchestrator | Saturday 12 July 2025 13:46:23 +0000 (0:00:00.892) 0:00:34.217 ********* 2025-07-12 13:48:11.604646 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:48:11.604656 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:48:11.604666 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:48:11.604677 | orchestrator | 2025-07-12 13:48:11.604688 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-07-12 13:48:11.604698 | orchestrator | Saturday 12 July 2025 13:46:30 +0000 (0:00:07.356) 0:00:41.574 ********* 2025-07-12 13:48:11.604708 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:48:11.604719 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:48:11.604729 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:48:11.604739 | orchestrator | 2025-07-12 13:48:11.604750 | orchestrator | PLAY [Restart 
rabbitmq services] *********************************************** 2025-07-12 13:48:11.604760 | orchestrator | 2025-07-12 13:48:11.604787 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-07-12 13:48:11.604798 | orchestrator | Saturday 12 July 2025 13:46:30 +0000 (0:00:00.359) 0:00:41.933 ********* 2025-07-12 13:48:11.604809 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:48:11.604819 | orchestrator | 2025-07-12 13:48:11.604830 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-07-12 13:48:11.604845 | orchestrator | Saturday 12 July 2025 13:46:31 +0000 (0:00:00.580) 0:00:42.513 ********* 2025-07-12 13:48:11.604856 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:48:11.604866 | orchestrator | 2025-07-12 13:48:11.604877 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-07-12 13:48:11.605005 | orchestrator | Saturday 12 July 2025 13:46:31 +0000 (0:00:00.248) 0:00:42.762 ********* 2025-07-12 13:48:11.605016 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:48:11.605026 | orchestrator | 2025-07-12 13:48:11.605036 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-07-12 13:48:11.605047 | orchestrator | Saturday 12 July 2025 13:46:33 +0000 (0:00:01.742) 0:00:44.505 ********* 2025-07-12 13:48:11.605057 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:48:11.605068 | orchestrator | 2025-07-12 13:48:11.605079 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-07-12 13:48:11.605090 | orchestrator | 2025-07-12 13:48:11.605100 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-07-12 13:48:11.605111 | orchestrator | Saturday 12 July 2025 13:47:29 +0000 (0:00:56.313) 0:01:40.818 ********* 2025-07-12 13:48:11.605122 | orchestrator | ok: 
[testbed-node-1] 2025-07-12 13:48:11.605132 | orchestrator | 2025-07-12 13:48:11.605143 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-07-12 13:48:11.605153 | orchestrator | Saturday 12 July 2025 13:47:30 +0000 (0:00:00.639) 0:01:41.457 ********* 2025-07-12 13:48:11.605164 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:48:11.605174 | orchestrator | 2025-07-12 13:48:11.605185 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-07-12 13:48:11.605205 | orchestrator | Saturday 12 July 2025 13:47:30 +0000 (0:00:00.406) 0:01:41.864 ********* 2025-07-12 13:48:11.605216 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:48:11.605226 | orchestrator | 2025-07-12 13:48:11.605237 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-07-12 13:48:11.605248 | orchestrator | Saturday 12 July 2025 13:47:38 +0000 (0:00:07.223) 0:01:49.087 ********* 2025-07-12 13:48:11.605258 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:48:11.605269 | orchestrator | 2025-07-12 13:48:11.605279 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-07-12 13:48:11.605290 | orchestrator | 2025-07-12 13:48:11.605301 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-07-12 13:48:11.605311 | orchestrator | Saturday 12 July 2025 13:47:48 +0000 (0:00:10.340) 0:01:59.427 ********* 2025-07-12 13:48:11.605321 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:48:11.605332 | orchestrator | 2025-07-12 13:48:11.605342 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-07-12 13:48:11.605353 | orchestrator | Saturday 12 July 2025 13:47:49 +0000 (0:00:00.592) 0:02:00.020 ********* 2025-07-12 13:48:11.605364 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:48:11.605374 | 
orchestrator | 2025-07-12 13:48:11.605385 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-07-12 13:48:11.605404 | orchestrator | Saturday 12 July 2025 13:47:49 +0000 (0:00:00.224) 0:02:00.244 ********* 2025-07-12 13:48:11.605415 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:48:11.605426 | orchestrator | 2025-07-12 13:48:11.605436 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-07-12 13:48:11.605447 | orchestrator | Saturday 12 July 2025 13:47:50 +0000 (0:00:01.704) 0:02:01.949 ********* 2025-07-12 13:48:11.605457 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:48:11.605468 | orchestrator | 2025-07-12 13:48:11.605479 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-07-12 13:48:11.605489 | orchestrator | 2025-07-12 13:48:11.605500 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-07-12 13:48:11.605510 | orchestrator | Saturday 12 July 2025 13:48:06 +0000 (0:00:15.230) 0:02:17.179 ********* 2025-07-12 13:48:11.605521 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:48:11.605531 | orchestrator | 2025-07-12 13:48:11.605542 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-07-12 13:48:11.605552 | orchestrator | Saturday 12 July 2025 13:48:06 +0000 (0:00:00.702) 0:02:17.882 ********* 2025-07-12 13:48:11.605563 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-07-12 13:48:11.605573 | orchestrator | enable_outward_rabbitmq_True 2025-07-12 13:48:11.605584 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-07-12 13:48:11.605594 | orchestrator | outward_rabbitmq_restart 2025-07-12 13:48:11.605605 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:48:11.605615 | orchestrator | 
ok: [testbed-node-0] 2025-07-12 13:48:11.605626 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:48:11.605636 | orchestrator | 2025-07-12 13:48:11.605647 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-07-12 13:48:11.605657 | orchestrator | skipping: no hosts matched 2025-07-12 13:48:11.605668 | orchestrator | 2025-07-12 13:48:11.605679 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-07-12 13:48:11.605689 | orchestrator | skipping: no hosts matched 2025-07-12 13:48:11.605700 | orchestrator | 2025-07-12 13:48:11.605710 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-07-12 13:48:11.605721 | orchestrator | skipping: no hosts matched 2025-07-12 13:48:11.605731 | orchestrator | 2025-07-12 13:48:11.605742 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:48:11.605753 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-07-12 13:48:11.605841 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-07-12 13:48:11.605856 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 13:48:11.605873 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 13:48:11.605884 | orchestrator | 2025-07-12 13:48:11.605895 | orchestrator | 2025-07-12 13:48:11.605905 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:48:11.605916 | orchestrator | Saturday 12 July 2025 13:48:09 +0000 (0:00:02.529) 0:02:20.411 ********* 2025-07-12 13:48:11.605926 | orchestrator | =============================================================================== 2025-07-12 13:48:11.605937 | orchestrator 
| rabbitmq : Waiting for rabbitmq to start ------------------------------- 81.88s 2025-07-12 13:48:11.605947 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.67s 2025-07-12 13:48:11.605958 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.36s 2025-07-12 13:48:11.605968 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 3.26s 2025-07-12 13:48:11.605978 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.00s 2025-07-12 13:48:11.605989 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.94s 2025-07-12 13:48:11.605999 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.53s 2025-07-12 13:48:11.606009 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.49s 2025-07-12 13:48:11.606073 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.43s 2025-07-12 13:48:11.606085 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.19s 2025-07-12 13:48:11.606095 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.90s 2025-07-12 13:48:11.606106 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.82s 2025-07-12 13:48:11.606117 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.81s 2025-07-12 13:48:11.606127 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.54s 2025-07-12 13:48:11.606138 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.44s 2025-07-12 13:48:11.606149 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.19s 2025-07-12 13:48:11.606159 | orchestrator | rabbitmq : 
Get container facts ------------------------------------------ 1.19s 2025-07-12 13:48:11.606170 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.09s 2025-07-12 13:48:11.606180 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.95s 2025-07-12 13:48:11.606191 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.89s 2025-07-12 13:48:11.606210 | orchestrator | 2025-07-12 13:48:11 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:48:11.606222 | orchestrator | 2025-07-12 13:48:11 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:48:11.607169 | orchestrator | 2025-07-12 13:48:11 | INFO  | Task 1565ddc5-b47f-4855-b487-bef57c856337 is in state STARTED 2025-07-12 13:48:11.607451 | orchestrator | 2025-07-12 13:48:11 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:48:14.655230 | orchestrator | 2025-07-12 13:48:14 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:48:14.656355 | orchestrator | 2025-07-12 13:48:14 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:48:14.659336 | orchestrator | 2025-07-12 13:48:14 | INFO  | Task 1565ddc5-b47f-4855-b487-bef57c856337 is in state STARTED 2025-07-12 13:48:14.660039 | orchestrator | 2025-07-12 13:48:14 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:48:17.704108 | orchestrator | 2025-07-12 13:48:17 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:48:17.704870 | orchestrator | 2025-07-12 13:48:17 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:48:17.706581 | orchestrator | 2025-07-12 13:48:17 | INFO  | Task 1565ddc5-b47f-4855-b487-bef57c856337 is in state STARTED 2025-07-12 13:48:17.706806 | orchestrator | 2025-07-12 13:48:17 | INFO  | Wait 1 second(s) until the next 
check 2025-07-12 13:49:06.475179 | orchestrator | 2025-07-12 13:49:06 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:49:06.478242 | orchestrator
| 2025-07-12 13:49:06 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:49:06.482524 | orchestrator | 2025-07-12 13:49:06 | INFO  | Task 1565ddc5-b47f-4855-b487-bef57c856337 is in state SUCCESS 2025-07-12 13:49:06.482558 | orchestrator | 2025-07-12 13:49:06 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:49:06.483536 | orchestrator | 2025-07-12 13:49:06.483566 | orchestrator | 2025-07-12 13:49:06.483742 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 13:49:06.483755 | orchestrator | 2025-07-12 13:49:06.483766 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 13:49:06.483778 | orchestrator | Saturday 12 July 2025 13:46:40 +0000 (0:00:00.168) 0:00:00.168 ********* 2025-07-12 13:49:06.483789 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:49:06.483801 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:49:06.483812 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:49:06.483823 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:49:06.483833 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:49:06.483844 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:49:06.483855 | orchestrator | 2025-07-12 13:49:06.484144 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 13:49:06.484160 | orchestrator | Saturday 12 July 2025 13:46:41 +0000 (0:00:00.706) 0:00:00.875 ********* 2025-07-12 13:49:06.484171 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-07-12 13:49:06.484182 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-07-12 13:49:06.484193 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-07-12 13:49:06.484204 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-07-12 13:49:06.484215 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-07-12 13:49:06.484226 
| orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-07-12 13:49:06.484236 | orchestrator | 2025-07-12 13:49:06.484247 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-07-12 13:49:06.484258 | orchestrator | 2025-07-12 13:49:06.484269 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-07-12 13:49:06.484280 | orchestrator | Saturday 12 July 2025 13:46:42 +0000 (0:00:00.914) 0:00:01.789 ********* 2025-07-12 13:49:06.484292 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:49:06.484304 | orchestrator | 2025-07-12 13:49:06.484315 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-07-12 13:49:06.484326 | orchestrator | Saturday 12 July 2025 13:46:44 +0000 (0:00:01.433) 0:00:03.223 ********* 2025-07-12 13:49:06.484408 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.484425 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.484437 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.484449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.484460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.484566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.484589 | orchestrator | 2025-07-12 13:49:06.484601 | orchestrator | TASK [ovn-controller : Copying 
over config.json files for services] ************ 2025-07-12 13:49:06.484612 | orchestrator | Saturday 12 July 2025 13:46:45 +0000 (0:00:01.862) 0:00:05.086 ********* 2025-07-12 13:49:06.484623 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.484634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.484645 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.484671 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.484682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.484693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.484742 | orchestrator | 2025-07-12 13:49:06.484754 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-07-12 13:49:06.484767 | orchestrator | Saturday 12 July 2025 13:46:47 +0000 (0:00:01.579) 0:00:06.666 ********* 2025-07-12 13:49:06.484780 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.484793 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.484817 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.484831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.484844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.484863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.484875 | orchestrator | 2025-07-12 13:49:06.484893 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-07-12 13:49:06.484906 | orchestrator | Saturday 12 July 2025 13:46:48 +0000 (0:00:01.425) 0:00:08.091 ********* 2025-07-12 13:49:06.484918 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.484931 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.484944 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.484957 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.484969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.484989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.485002 | orchestrator | 2025-07-12 13:49:06.485014 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-07-12 13:49:06.485033 | orchestrator | Saturday 12 July 2025 13:46:50 +0000 (0:00:01.697) 0:00:09.788 ********* 2025-07-12 13:49:06.485045 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.485058 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.485075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.485089 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.485102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.485114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.485125 | orchestrator | 2025-07-12 13:49:06.485136 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-07-12 13:49:06.485146 | orchestrator | Saturday 12 July 2025 13:46:52 +0000 (0:00:01.569) 0:00:11.358 ********* 2025-07-12 13:49:06.485157 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:49:06.485169 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:49:06.485179 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:49:06.485190 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:49:06.485201 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:49:06.485211 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:49:06.485222 | orchestrator | 2025-07-12 13:49:06.485232 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-07-12 13:49:06.485243 | orchestrator | Saturday 12 July 2025 13:46:54 +0000 (0:00:02.449) 0:00:13.807 ********* 2025-07-12 13:49:06.485254 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-07-12 13:49:06.485265 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-07-12 13:49:06.485290 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-07-12 13:49:06.485307 | orchestrator | changed: [testbed-node-5] => 
(item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-07-12 13:49:06.485319 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-07-12 13:49:06.485329 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-07-12 13:49:06.485340 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-07-12 13:49:06.485350 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-07-12 13:49:06.485361 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-07-12 13:49:06.485371 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-07-12 13:49:06.485382 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-07-12 13:49:06.485392 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-07-12 13:49:06.485403 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-07-12 13:49:06.485414 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-07-12 13:49:06.485425 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-07-12 13:49:06.485436 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-07-12 13:49:06.485447 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 
2025-07-12 13:49:06.485462 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-07-12 13:49:06.485473 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-07-12 13:49:06.485484 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-07-12 13:49:06.485495 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-07-12 13:49:06.485506 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-07-12 13:49:06.485516 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-07-12 13:49:06.485527 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-07-12 13:49:06.485537 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-07-12 13:49:06.485548 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-07-12 13:49:06.485558 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-07-12 13:49:06.485569 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-07-12 13:49:06.485580 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-07-12 13:49:06.485590 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-07-12 13:49:06.485601 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-07-12 13:49:06.485611 | orchestrator | changed: 
[testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-07-12 13:49:06.485628 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-07-12 13:49:06.485639 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-07-12 13:49:06.485649 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-07-12 13:49:06.485660 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-07-12 13:49:06.485671 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-07-12 13:49:06.485682 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-07-12 13:49:06.485692 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-07-12 13:49:06.485719 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-07-12 13:49:06.485737 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-07-12 13:49:06.485748 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-07-12 13:49:06.485759 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-07-12 13:49:06.485771 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-07-12 13:49:06.485782 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 
'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-07-12 13:49:06.485792 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-07-12 13:49:06.485803 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-07-12 13:49:06.485814 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-07-12 13:49:06.485824 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-07-12 13:49:06.485835 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-07-12 13:49:06.485846 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-07-12 13:49:06.485857 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-07-12 13:49:06.485868 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-07-12 13:49:06.485883 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-07-12 13:49:06.485894 | orchestrator | 2025-07-12 13:49:06.485905 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-07-12 13:49:06.485916 | orchestrator | Saturday 12 July 2025 13:47:14 +0000 (0:00:19.588) 0:00:33.395 ********* 2025-07-12 13:49:06.485927 | orchestrator | 2025-07-12 13:49:06.485937 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-07-12 
13:49:06.485948 | orchestrator | Saturday 12 July 2025 13:47:14 +0000 (0:00:00.064) 0:00:33.459 ********* 2025-07-12 13:49:06.485959 | orchestrator | 2025-07-12 13:49:06.485970 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-07-12 13:49:06.485980 | orchestrator | Saturday 12 July 2025 13:47:14 +0000 (0:00:00.064) 0:00:33.523 ********* 2025-07-12 13:49:06.485998 | orchestrator | 2025-07-12 13:49:06.486008 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-07-12 13:49:06.486087 | orchestrator | Saturday 12 July 2025 13:47:14 +0000 (0:00:00.065) 0:00:33.589 ********* 2025-07-12 13:49:06.486102 | orchestrator | 2025-07-12 13:49:06.486113 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-07-12 13:49:06.486123 | orchestrator | Saturday 12 July 2025 13:47:14 +0000 (0:00:00.139) 0:00:33.729 ********* 2025-07-12 13:49:06.486134 | orchestrator | 2025-07-12 13:49:06.486145 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-07-12 13:49:06.486156 | orchestrator | Saturday 12 July 2025 13:47:14 +0000 (0:00:00.105) 0:00:33.835 ********* 2025-07-12 13:49:06.486166 | orchestrator | 2025-07-12 13:49:06.486177 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-07-12 13:49:06.486188 | orchestrator | Saturday 12 July 2025 13:47:14 +0000 (0:00:00.093) 0:00:33.928 ********* 2025-07-12 13:49:06.486198 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:49:06.486209 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:49:06.486219 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:49:06.486230 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:49:06.486241 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:49:06.486251 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:49:06.486262 | orchestrator | 2025-07-12 13:49:06.486273 | 
2025-07-12 13:49:06.486283 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
orchestrator | Saturday 12 July 2025 13:47:16 +0000 (0:00:02.114) 0:00:36.043 *********
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-5]
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | PLAY [Apply role ovn-db] *******************************************************
orchestrator |
orchestrator | TASK [ovn-db : include_tasks] **************************************************
orchestrator | Saturday 12 July 2025 13:47:52 +0000 (0:00:35.731) 0:01:11.774 *********
orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
orchestrator |
orchestrator | TASK [ovn-db : include_tasks] **************************************************
orchestrator | Saturday 12 July 2025 13:47:53 +0000 (0:00:00.516) 0:01:12.291 *********
orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
orchestrator |
orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
orchestrator | Saturday 12 July 2025 13:47:53 +0000 (0:00:00.659) 0:01:12.951 *********
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
orchestrator | Saturday 12 July 2025 13:47:54 +0000 (0:00:00.772) 0:01:13.723 *********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
orchestrator | Saturday 12 July 2025 13:47:54 +0000 (0:00:00.355) 0:01:14.079 *********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
orchestrator | Saturday 12 July 2025 13:47:55 +0000 (0:00:00.406) 0:01:14.485 *********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
orchestrator | Saturday 12 July 2025 13:47:55 +0000 (0:00:00.666) 0:01:15.152 *********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
orchestrator | Saturday 12 July 2025 13:47:56 +0000 (0:00:00.443) 0:01:15.596 *********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
orchestrator | Saturday 12 July 2025 13:47:56 +0000 (0:00:00.278) 0:01:15.875 *********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
orchestrator | Saturday 12 July 2025 13:47:56 +0000 (0:00:00.294) 0:01:16.169 *********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
orchestrator | Saturday 12 July 2025 13:47:57 +0000 (0:00:00.478) 0:01:16.647 *********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
orchestrator | Saturday 12 July 2025 13:47:57 +0000 (0:00:00.301) 0:01:16.948 *********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
orchestrator | Saturday 12 July 2025 13:47:58 +0000 (0:00:00.288) 0:01:17.237 *********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
orchestrator | Saturday 12 July 2025 13:47:58 +0000 (0:00:00.309) 0:01:17.547 *********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
orchestrator | Saturday 12 July 2025 13:47:58 +0000 (0:00:00.510) 0:01:18.057 *********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
orchestrator | Saturday 12 July 2025 13:47:59 +0000 (0:00:00.338) 0:01:18.396 *********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
orchestrator | Saturday 12 July 2025 13:47:59 +0000 (0:00:00.319) 0:01:18.716 *********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
orchestrator | Saturday 12 July 2025 13:47:59 +0000 (0:00:00.306) 0:01:19.022 *********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
orchestrator | Saturday 12 July 2025 13:48:00 +0000 (0:00:00.523) 0:01:19.546 *********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ovn-db : include_tasks] **************************************************
orchestrator | Saturday 12 July 2025 13:48:00 +0000 (0:00:00.307) 0:01:19.854 *********
orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
orchestrator |
orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
orchestrator | Saturday 12 July 2025 13:48:01 +0000 (0:00:00.611) 0:01:20.465 *********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
orchestrator | Saturday 12 July 2025 13:48:02 +0000 (0:00:00.863) 0:01:21.329 *********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
orchestrator | Saturday 12 July 2025 13:48:02 +0000 (0:00:00.451) 0:01:21.780 *********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
orchestrator | Saturday 12 July 2025 13:48:02 +0000 (0:00:00.360) 0:01:22.141 *********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
orchestrator | Saturday 12 July 2025 13:48:03 +0000 (0:00:00.401) 0:01:22.542 *********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
orchestrator | Saturday 12 July 2025 13:48:04 +0000 (0:00:00.681) 0:01:23.224 *********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
orchestrator | Saturday 12 July 2025 13:48:04 +0000 (0:00:00.346) 0:01:23.570 *********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
orchestrator | Saturday 12 July 2025 13:48:04 +0000 (0:00:00.436) 0:01:24.007 *********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
orchestrator | Saturday 12 July 2025 13:48:05 +0000 (0:00:00.437) 0:01:24.445 *********
orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator |
orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
orchestrator | Saturday 12 July 2025 13:48:06 +0000 (0:00:01.569) 0:01:26.014 *********
orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator |
orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
orchestrator | Saturday 12 July 2025 13:48:10 +0000 (0:00:04.010) 0:01:30.025 *********
orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator |
orchestrator | TASK [ovn-db : Flush handlers] *************************************************
orchestrator | Saturday 12 July 2025 13:48:12 +0000 (0:00:02.073) 0:01:32.098 *********
orchestrator |
orchestrator | TASK [ovn-db : Flush handlers] *************************************************
orchestrator | Saturday 12 July 2025 13:48:12 +0000 (0:00:00.066) 0:01:32.164 *********
orchestrator |
orchestrator | TASK [ovn-db : Flush handlers] *************************************************
orchestrator | Saturday 12 July 2025 13:48:13 +0000 (0:00:00.063) 0:01:32.228 *********
orchestrator |
orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
orchestrator | Saturday 12 July 2025 13:48:13 +0000 (0:00:00.066) 0:01:32.294 *********
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-0]
orchestrator |
orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
orchestrator | Saturday 12 July 2025 13:48:19 +0000 (0:00:06.467) 0:01:38.761 *********
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
orchestrator | Saturday 12 July 2025 13:48:22 +0000 (0:00:02.685) 0:01:41.446 *********
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-1]
orchestrator |
orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
orchestrator | Saturday 12 July 2025 13:48:24 +0000 (0:00:02.495) 0:01:43.942 *********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
orchestrator | Saturday 12 July 2025 13:48:24 +0000 (0:00:00.134) 0:01:44.076 *********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
orchestrator | Saturday 12 July 2025 13:48:25 +0000 (0:00:01.031) 0:01:45.108 *********
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | changed: [testbed-node-0]
orchestrator |
orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
orchestrator | Saturday 12 July 2025 13:48:26 +0000 (0:00:00.943) 0:01:46.051 *********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
orchestrator | Saturday 12 July 2025 13:48:27 +0000 (0:00:00.839) 0:01:46.890 *********
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | changed: [testbed-node-0]
orchestrator |
orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
orchestrator | Saturday 12 July 2025 13:48:28 +0000 (0:00:00.633) 0:01:47.524 *********
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
orchestrator | Saturday 12 July 2025 13:48:29 +0000 (0:00:00.833) 0:01:48.357 *********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
orchestrator | Saturday 12 July 2025 13:48:30 +0000 (0:00:01.385) 0:01:49.743 *********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
orchestrator | Saturday 12 July 2025 13:48:30 +0000 (0:00:00.344) 0:01:50.087 *********
orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator |
orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
orchestrator | Saturday 12 July 2025 13:48:32 +0000 (0:00:01.430) 0:01:51.518 *********
orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes':
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.489762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.489786 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.489798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.489809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-07-12 13:49:06.489820 | orchestrator | 2025-07-12 13:49:06.489838 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-07-12 13:49:06.489848 | orchestrator | Saturday 12 July 2025 13:48:36 +0000 (0:00:04.411) 0:01:55.930 ********* 2025-07-12 13:49:06.489860 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.489879 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.489890 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.489901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 
13:49:06.489912 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.489923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.489934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.489978 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.489991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:06.490008 | orchestrator | 2025-07-12 13:49:06.490047 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-07-12 13:49:06.490060 | orchestrator | Saturday 12 July 2025 13:48:39 +0000 (0:00:03.077) 0:01:59.007 ********* 2025-07-12 13:49:06.490071 | orchestrator | 2025-07-12 13:49:06.490082 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-07-12 13:49:06.490093 | orchestrator | Saturday 12 July 2025 13:48:39 +0000 (0:00:00.065) 0:01:59.072 ********* 2025-07-12 13:49:06.490104 | orchestrator | 2025-07-12 13:49:06.490114 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-07-12 13:49:06.490125 | orchestrator | Saturday 12 July 2025 13:48:39 +0000 (0:00:00.066) 0:01:59.139 ********* 2025-07-12 13:49:06.490136 | orchestrator | 2025-07-12 13:49:06.490146 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-07-12 13:49:06.490157 | orchestrator | Saturday 12 July 2025 13:48:40 +0000 (0:00:00.067) 0:01:59.206 ********* 2025-07-12 13:49:06.490167 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:49:06.490178 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:49:06.490188 | orchestrator | 2025-07-12 13:49:06.490199 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-07-12 13:49:06.490210 | orchestrator | Saturday 12 July 2025 13:48:46 +0000 (0:00:06.306) 0:02:05.512 ********* 2025-07-12 13:49:06.490220 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:49:06.490231 | orchestrator | changed: [testbed-node-2] 2025-07-12 
13:49:06.490241 | orchestrator | 2025-07-12 13:49:06.490257 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-07-12 13:49:06.490268 | orchestrator | Saturday 12 July 2025 13:48:52 +0000 (0:00:06.163) 0:02:11.676 ********* 2025-07-12 13:49:06.490279 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:49:06.490289 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:49:06.490300 | orchestrator | 2025-07-12 13:49:06.490310 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-07-12 13:49:06.490321 | orchestrator | Saturday 12 July 2025 13:48:58 +0000 (0:00:06.281) 0:02:17.957 ********* 2025-07-12 13:49:06.490332 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:49:06.490342 | orchestrator | 2025-07-12 13:49:06.490353 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-07-12 13:49:06.490364 | orchestrator | Saturday 12 July 2025 13:48:58 +0000 (0:00:00.150) 0:02:18.108 ********* 2025-07-12 13:49:06.490375 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:49:06.490385 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:49:06.490396 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:49:06.490406 | orchestrator | 2025-07-12 13:49:06.490417 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-07-12 13:49:06.490427 | orchestrator | Saturday 12 July 2025 13:49:00 +0000 (0:00:01.118) 0:02:19.226 ********* 2025-07-12 13:49:06.490438 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:49:06.490449 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:49:06.490459 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:49:06.490469 | orchestrator | 2025-07-12 13:49:06.490480 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-07-12 13:49:06.490491 | orchestrator | Saturday 12 July 2025 13:49:00 
+0000 (0:00:00.641) 0:02:19.868 ********* 2025-07-12 13:49:06.490502 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:49:06.490512 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:49:06.490523 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:49:06.490533 | orchestrator | 2025-07-12 13:49:06.490544 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-07-12 13:49:06.490554 | orchestrator | Saturday 12 July 2025 13:49:01 +0000 (0:00:00.808) 0:02:20.676 ********* 2025-07-12 13:49:06.490565 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:49:06.490575 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:49:06.490586 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:49:06.490596 | orchestrator | 2025-07-12 13:49:06.490608 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-07-12 13:49:06.490625 | orchestrator | Saturday 12 July 2025 13:49:02 +0000 (0:00:00.621) 0:02:21.297 ********* 2025-07-12 13:49:06.490635 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:49:06.490646 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:49:06.490657 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:49:06.490667 | orchestrator | 2025-07-12 13:49:06.490678 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-07-12 13:49:06.490689 | orchestrator | Saturday 12 July 2025 13:49:03 +0000 (0:00:01.180) 0:02:22.478 ********* 2025-07-12 13:49:06.490751 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:49:06.490764 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:49:06.490775 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:49:06.490785 | orchestrator | 2025-07-12 13:49:06.490796 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:49:06.490807 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 
ignored=0 2025-07-12 13:49:06.490818 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-07-12 13:49:06.490837 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-07-12 13:49:06.490848 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:49:06.490859 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:49:06.490870 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:49:06.490880 | orchestrator | 2025-07-12 13:49:06.490891 | orchestrator | 2025-07-12 13:49:06.490902 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:49:06.490913 | orchestrator | Saturday 12 July 2025 13:49:04 +0000 (0:00:01.034) 0:02:23.513 ********* 2025-07-12 13:49:06.490923 | orchestrator | =============================================================================== 2025-07-12 13:49:06.490934 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 35.73s 2025-07-12 13:49:06.490944 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.59s 2025-07-12 13:49:06.490955 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 12.77s 2025-07-12 13:49:06.490965 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 8.85s 2025-07-12 13:49:06.490976 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.78s 2025-07-12 13:49:06.490986 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.41s 2025-07-12 13:49:06.490996 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.01s 
2025-07-12 13:49:06.491007 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.08s
2025-07-12 13:49:06.491018 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.45s
2025-07-12 13:49:06.491028 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.11s
2025-07-12 13:49:06.491038 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.07s
2025-07-12 13:49:06.491049 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.86s
2025-07-12 13:49:06.491060 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.70s
2025-07-12 13:49:06.491069 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.58s
2025-07-12 13:49:06.491078 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.57s
2025-07-12 13:49:06.491088 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.57s
2025-07-12 13:49:06.491104 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.43s
2025-07-12 13:49:06.491113 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.43s
2025-07-12 13:49:06.491123 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.43s
2025-07-12 13:49:06.491132 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.39s
2025-07-12 13:49:09.538220 | orchestrator | 2025-07-12 13:49:09 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED
2025-07-12 13:49:09.541201 | orchestrator | 2025-07-12 13:49:09 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED
2025-07-12 13:49:09.541544 | orchestrator | 2025-07-12 13:49:09 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:51:14.524497 | orchestrator | 2025-07-12 13:51:14 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED
2025-07-12 13:51:14.527070 | orchestrator | 2025-07-12 13:51:14 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED
2025-07-12 13:51:14.527123 | orchestrator | 2025-07-12 13:51:14 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:51:17.576510 | orchestrator | 2025-07-12 13:51:17 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED
2025-07-12 13:51:17.580404 | orchestrator | 2025-07-12 13:51:17 | INFO  
| Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:51:17.580776 | orchestrator | 2025-07-12 13:51:17 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:51:20.636543 | orchestrator | 2025-07-12 13:51:20 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:51:20.638636 | orchestrator | 2025-07-12 13:51:20 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:51:20.638680 | orchestrator | 2025-07-12 13:51:20 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:51:23.682296 | orchestrator | 2025-07-12 13:51:23 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:51:23.683783 | orchestrator | 2025-07-12 13:51:23 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:51:23.683814 | orchestrator | 2025-07-12 13:51:23 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:51:26.724248 | orchestrator | 2025-07-12 13:51:26 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:51:26.725988 | orchestrator | 2025-07-12 13:51:26 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:51:26.726065 | orchestrator | 2025-07-12 13:51:26 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:51:29.766950 | orchestrator | 2025-07-12 13:51:29 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:51:29.769741 | orchestrator | 2025-07-12 13:51:29 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:51:29.770273 | orchestrator | 2025-07-12 13:51:29 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:51:32.813758 | orchestrator | 2025-07-12 13:51:32 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:51:32.814778 | orchestrator | 2025-07-12 13:51:32 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 
13:51:32.815063 | orchestrator | 2025-07-12 13:51:32 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:51:35.865919 | orchestrator | 2025-07-12 13:51:35 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:51:35.867367 | orchestrator | 2025-07-12 13:51:35 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:51:35.867399 | orchestrator | 2025-07-12 13:51:35 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:51:38.918370 | orchestrator | 2025-07-12 13:51:38 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:51:38.918850 | orchestrator | 2025-07-12 13:51:38 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:51:38.918883 | orchestrator | 2025-07-12 13:51:38 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:51:41.967971 | orchestrator | 2025-07-12 13:51:41 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:51:41.968507 | orchestrator | 2025-07-12 13:51:41 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:51:41.968539 | orchestrator | 2025-07-12 13:51:41 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:51:45.019171 | orchestrator | 2025-07-12 13:51:45 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:51:45.019781 | orchestrator | 2025-07-12 13:51:45 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:51:45.019816 | orchestrator | 2025-07-12 13:51:45 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:51:48.057789 | orchestrator | 2025-07-12 13:51:48 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:51:48.059862 | orchestrator | 2025-07-12 13:51:48 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED 2025-07-12 13:51:48.060078 | orchestrator | 2025-07-12 13:51:48 | INFO  | Wait 1 second(s) 
until the next check
2025-07-12 13:51:51.101703 | orchestrator | 2025-07-12 13:51:51 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED
2025-07-12 13:51:51.103387 | orchestrator | 2025-07-12 13:51:51 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state STARTED
2025-07-12 13:51:51.103423 | orchestrator | 2025-07-12 13:51:51 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:51:54.145445 | orchestrator | 2025-07-12 13:51:54 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED
2025-07-12 13:51:54.157999 | orchestrator | 2025-07-12 13:51:54 | INFO  | Task 2dc9582a-1d30-453e-b75d-caf3a2ab735c is in state SUCCESS
2025-07-12 13:51:54.159995 | orchestrator |
2025-07-12 13:51:54.160072 | orchestrator |
2025-07-12 13:51:54.160086 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 13:51:54.160097 | orchestrator |
2025-07-12 13:51:54.160106 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 13:51:54.160116 | orchestrator | Saturday 12 July 2025 13:45:28 +0000 (0:00:00.285) 0:00:00.285 *********
2025-07-12 13:51:54.160126 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:51:54.160223 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:51:54.160235 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:51:54.160245 | orchestrator |
2025-07-12 13:51:54.160284 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 13:51:54.160295 | orchestrator | Saturday 12 July 2025 13:45:29 +0000 (0:00:00.446) 0:00:00.732 *********
2025-07-12 13:51:54.160305 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-07-12 13:51:54.160343 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-07-12 13:51:54.160353 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-07-12 13:51:54.160362 | orchestrator |
2025-07-12 13:51:54.160372 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-07-12 13:51:54.160381 | orchestrator |
2025-07-12 13:51:54.160457 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-07-12 13:51:54.160468 | orchestrator | Saturday 12 July 2025 13:45:29 +0000 (0:00:00.571) 0:00:01.303 *********
2025-07-12 13:51:54.160477 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 13:51:54.160487 | orchestrator |
2025-07-12 13:51:54.160496 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-07-12 13:51:54.160506 | orchestrator | Saturday 12 July 2025 13:45:30 +0000 (0:00:00.832) 0:00:02.136 *********
2025-07-12 13:51:54.160515 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:51:54.160524 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:51:54.160534 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:51:54.160605 | orchestrator |
2025-07-12 13:51:54.160630 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-07-12 13:51:54.160640 | orchestrator | Saturday 12 July 2025 13:45:31 +0000 (0:00:00.766) 0:00:02.902 *********
2025-07-12 13:51:54.160650 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 13:51:54.160659 | orchestrator |
2025-07-12 13:51:54.160669 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-07-12 13:51:54.160678 | orchestrator | Saturday 12 July 2025 13:45:32 +0000 (0:00:01.227) 0:00:04.130 *********
2025-07-12 13:51:54.160688 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:51:54.160697 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:51:54.160706 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:51:54.160716 | orchestrator |
2025-07-12 13:51:54.160725 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-07-12 13:51:54.160734 | orchestrator | Saturday 12 July 2025 13:45:33 +0000 (0:00:00.821) 0:00:04.952 *********
2025-07-12 13:51:54.160910 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-07-12 13:51:54.160922 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-07-12 13:51:54.160931 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-07-12 13:51:54.160975 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-07-12 13:51:54.160987 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-07-12 13:51:54.160997 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-07-12 13:51:54.161007 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-07-12 13:51:54.161016 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-07-12 13:51:54.161026 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-07-12 13:51:54.161035 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-07-12 13:51:54.161045 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-07-12 13:51:54.161054 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-07-12 13:51:54.161063 | orchestrator |
2025-07-12 13:51:54.161073 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-07-12 13:51:54.161082 | orchestrator | Saturday 12 July 2025 13:45:37 +0000 (0:00:03.946) 0:00:08.898 *********
2025-07-12 13:51:54.161092 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-07-12 13:51:54.161171 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-07-12 13:51:54.161181 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-07-12 13:51:54.161191 | orchestrator |
2025-07-12 13:51:54.161201 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-07-12 13:51:54.161211 | orchestrator | Saturday 12 July 2025 13:45:38 +0000 (0:00:00.847) 0:00:09.746 *********
2025-07-12 13:51:54.161220 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-07-12 13:51:54.161230 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-07-12 13:51:54.161239 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-07-12 13:51:54.161248 | orchestrator |
2025-07-12 13:51:54.161258 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-07-12 13:51:54.161267 | orchestrator | Saturday 12 July 2025 13:45:39 +0000 (0:00:01.622) 0:00:11.368 *********
2025-07-12 13:51:54.161277 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-07-12 13:51:54.161286 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:51:54.161311 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-07-12 13:51:54.161321 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:51:54.161330 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2025-07-12 13:51:54.161340 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:51:54.161349 | orchestrator |
2025-07-12 13:51:54.161359 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2025-07-12 13:51:54.161369 | orchestrator | Saturday 12 July 2025 13:45:40 +0000 (0:00:01.111) 0:00:12.480 *********
2025-07-12 13:51:54.161382 | orchestrator | changed: [testbed-node-1] =>
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-12 13:51:54.161404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-12 13:51:54.161414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-12 13:51:54.161424 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 13:51:54.161441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 13:51:54.161486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 
6032'], 'timeout': '30'}}}) 2025-07-12 13:51:54.161497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 13:51:54.161532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 13:51:54.161576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 13:51:54.161588 | orchestrator | 2025-07-12 13:51:54.161598 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-07-12 13:51:54.161608 | orchestrator | Saturday 12 July 2025 13:45:44 +0000 (0:00:03.331) 0:00:15.812 ********* 2025-07-12 
13:51:54.161617 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:51:54.161627 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:51:54.161636 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:51:54.161645 | orchestrator |
2025-07-12 13:51:54.161743 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2025-07-12 13:51:54.161754 | orchestrator | Saturday 12 July 2025 13:45:45 +0000 (0:00:01.280) 0:00:17.092 *********
2025-07-12 13:51:54.161763 | orchestrator | changed: [testbed-node-1] => (item=users)
2025-07-12 13:51:54.161773 | orchestrator | changed: [testbed-node-0] => (item=users)
2025-07-12 13:51:54.161790 | orchestrator | changed: [testbed-node-2] => (item=users)
2025-07-12 13:51:54.161799 | orchestrator | changed: [testbed-node-1] => (item=rules)
2025-07-12 13:51:54.161809 | orchestrator | changed: [testbed-node-0] => (item=rules)
2025-07-12 13:51:54.161818 | orchestrator | changed: [testbed-node-2] => (item=rules)
2025-07-12 13:51:54.161827 | orchestrator |
2025-07-12 13:51:54.161836 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2025-07-12 13:51:54.161846 | orchestrator | Saturday 12 July 2025 13:45:47 +0000 (0:00:02.234) 0:00:19.326 *********
2025-07-12 13:51:54.161855 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:51:54.161864 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:51:54.161874 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:51:54.161883 | orchestrator |
2025-07-12 13:51:54.161892 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2025-07-12 13:51:54.161902 | orchestrator | Saturday 12 July 2025 13:45:49 +0000 (0:00:02.154) 0:00:21.480 *********
2025-07-12 13:51:54.161911 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:51:54.161920 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:51:54.161929 | orchestrator | ok: [testbed-node-2]
2025-07-12
13:51:54.161939 | orchestrator | 2025-07-12 13:51:54.161948 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-07-12 13:51:54.161957 | orchestrator | Saturday 12 July 2025 13:45:51 +0000 (0:00:01.560) 0:00:23.041 ********* 2025-07-12 13:51:54.161967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-12 13:51:54.161986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-12 13:51:54.161997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:51:54.162055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:51:54.162070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:51:54.162213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:51:54.162224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__eb262a2103f21eae5df8196d0d91cf423a8af579', '__omit_place_holder__eb262a2103f21eae5df8196d0d91cf423a8af579'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-12 13:51:54.162234 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.162253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__eb262a2103f21eae5df8196d0d91cf423a8af579', '__omit_place_holder__eb262a2103f21eae5df8196d0d91cf423a8af579'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-12 13:51:54.162263 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.162273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-12 13:51:54.162289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:51:54.162306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:51:54.162317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 
'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__eb262a2103f21eae5df8196d0d91cf423a8af579', '__omit_place_holder__eb262a2103f21eae5df8196d0d91cf423a8af579'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-12 13:51:54.162406 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.162416 | orchestrator | 2025-07-12 13:51:54.162426 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-07-12 13:51:54.162435 | orchestrator | Saturday 12 July 2025 13:45:52 +0000 (0:00:01.462) 0:00:24.504 ********* 2025-07-12 13:51:54.162445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-12 13:51:54.162461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-12 13:51:54.162472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-12 13:51:54.162482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 13:51:54.162502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:51:54.162513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__eb262a2103f21eae5df8196d0d91cf423a8af579', '__omit_place_holder__eb262a2103f21eae5df8196d0d91cf423a8af579'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-12 13:51:54.162523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 13:51:54.162533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:51:54.162604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__eb262a2103f21eae5df8196d0d91cf423a8af579', '__omit_place_holder__eb262a2103f21eae5df8196d0d91cf423a8af579'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-12 13:51:54.162616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 13:51:54.162645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:51:54.162656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__eb262a2103f21eae5df8196d0d91cf423a8af579', '__omit_place_holder__eb262a2103f21eae5df8196d0d91cf423a8af579'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-12 13:51:54.162665 | orchestrator | 2025-07-12 13:51:54.162675 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-07-12 13:51:54.162684 | orchestrator | Saturday 12 July 2025 13:45:57 +0000 (0:00:04.397) 0:00:28.901 ********* 2025-07-12 13:51:54.162694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-12 13:51:54.162704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-12 13:51:54.162723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-12 13:51:54.162733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 13:51:54.162756 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 13:51:54.162766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 13:51:54.162776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 13:51:54.162786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 13:51:54.162795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 13:51:54.162835 | orchestrator | 2025-07-12 13:51:54.162847 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-07-12 13:51:54.162922 | orchestrator | Saturday 12 July 2025 13:46:01 +0000 (0:00:04.437) 0:00:33.339 ********* 2025-07-12 13:51:54.162933 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-07-12 13:51:54.165191 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-07-12 13:51:54.165278 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-07-12 13:51:54.165295 | orchestrator | 2025-07-12 13:51:54.165308 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-07-12 13:51:54.165340 | orchestrator | Saturday 12 July 2025 13:46:04 +0000 (0:00:02.921) 0:00:36.260 ********* 
2025-07-12 13:51:54.165351 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-07-12 13:51:54.165362 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-07-12 13:51:54.165372 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-07-12 13:51:54.165383 | orchestrator | 2025-07-12 13:51:54.165394 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-07-12 13:51:54.165404 | orchestrator | Saturday 12 July 2025 13:46:10 +0000 (0:00:06.232) 0:00:42.493 ********* 2025-07-12 13:51:54.165415 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.165426 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.165436 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.165447 | orchestrator | 2025-07-12 13:51:54.165457 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-07-12 13:51:54.165468 | orchestrator | Saturday 12 July 2025 13:46:11 +0000 (0:00:00.891) 0:00:43.385 ********* 2025-07-12 13:51:54.165479 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-07-12 13:51:54.165491 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-07-12 13:51:54.165509 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-07-12 13:51:54.165520 | orchestrator | 2025-07-12 13:51:54.165531 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-07-12 13:51:54.165570 | orchestrator | Saturday 12 July 2025 13:46:16 +0000 (0:00:04.564) 0:00:47.949 ********* 
2025-07-12 13:51:54.165581 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-07-12 13:51:54.165592 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-07-12 13:51:54.165602 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-07-12 13:51:54.165613 | orchestrator | 2025-07-12 13:51:54.165623 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-07-12 13:51:54.165634 | orchestrator | Saturday 12 July 2025 13:46:18 +0000 (0:00:02.160) 0:00:50.109 ********* 2025-07-12 13:51:54.165644 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-07-12 13:51:54.165655 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-07-12 13:51:54.165665 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-07-12 13:51:54.165676 | orchestrator | 2025-07-12 13:51:54.165686 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-07-12 13:51:54.165697 | orchestrator | Saturday 12 July 2025 13:46:20 +0000 (0:00:01.900) 0:00:52.010 ********* 2025-07-12 13:51:54.165708 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-07-12 13:51:54.165726 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-07-12 13:51:54.165744 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-07-12 13:51:54.165763 | orchestrator | 2025-07-12 13:51:54.165783 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-07-12 13:51:54.165802 | orchestrator | Saturday 12 July 2025 13:46:22 +0000 (0:00:01.782) 0:00:53.792 ********* 2025-07-12 13:51:54.165820 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:51:54.165848 | orchestrator | 2025-07-12 13:51:54.165861 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-07-12 13:51:54.165874 | orchestrator | Saturday 12 July 2025 13:46:22 +0000 (0:00:00.654) 0:00:54.447 ********* 2025-07-12 13:51:54.165909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-12 13:51:54.165945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-12 13:51:54.165959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-12 13:51:54.165977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 13:51:54.165990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 13:51:54.166003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 13:51:54.170275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 13:51:54.170340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 13:51:54.170375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 13:51:54.170389 | orchestrator | 2025-07-12 13:51:54.170402 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-07-12 13:51:54.170413 | orchestrator | Saturday 12 July 2025 13:46:26 +0000 (0:00:03.343) 0:00:57.790 ********* 2025-07-12 13:51:54.170425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-12 13:51:54.170452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:51:54.170464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:51:54.170476 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.170488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-12 13:51:54.170514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:51:54.170537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:51:54.170574 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.170585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-12 13:51:54.170602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:51:54.170614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:51:54.170625 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.170636 | orchestrator | 2025-07-12 13:51:54.170647 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-07-12 13:51:54.170658 | orchestrator | Saturday 12 July 2025 13:46:26 +0000 (0:00:00.539) 0:00:58.330 ********* 2025-07-12 13:51:54.170669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-12 13:51:54.170689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:51:54.170707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:51:54.170718 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.170730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-12 13:51:54.170741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:51:54.170757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:51:54.170768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-12 13:51:54.170786 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.170797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:51:54.170808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:51:54.170819 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.170830 | orchestrator | 2025-07-12 13:51:54.170841 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-07-12 13:51:54.170851 | orchestrator | Saturday 12 July 2025 13:46:27 +0000 (0:00:01.124) 0:00:59.455 ********* 2025-07-12 13:51:54.170870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-12 13:51:54.170882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:51:54.170893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:51:54.170904 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.170944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-12 13:51:54.170963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:51:54.170975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:51:54.170986 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.171011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-12 13:51:54.171022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:51:54.171034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:51:54.171045 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.171055 | orchestrator | 2025-07-12 13:51:54.171071 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-07-12 13:51:54.171082 | orchestrator | Saturday 12 July 2025 13:46:28 +0000 (0:00:00.881) 0:01:00.337 ********* 2025-07-12 13:51:54.171100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-12 13:51:54.171112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:51:54.171123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:51:54.171134 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.171145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-12 13:51:54.171178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:51:54.171191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:51:54.171201 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.171217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-12 13:51:54.171235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:51:54.171247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:51:54.171258 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.171268 | orchestrator | 2025-07-12 13:51:54.171279 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-07-12 13:51:54.171290 | orchestrator | Saturday 12 July 2025 13:46:30 +0000 (0:00:01.344) 0:01:01.681 ********* 2025-07-12 13:51:54.171301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-12 13:51:54.171321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:51:54.171333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:51:54.171344 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.171366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-12 13:51:54.171378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:51:54.171389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:51:54.171400 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.171411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-12 13:51:54.171428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:51:54.171440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:51:54.171451 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.171461 | orchestrator | 2025-07-12 13:51:54.171472 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-07-12 13:51:54.171498 | orchestrator | Saturday 12 July 2025 13:46:31 +0000 (0:00:01.351) 0:01:03.033 ********* 2025-07-12 13:51:54.171513 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-12 13:51:54.171525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:51:54.171537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:51:54.171666 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.171688 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-12 13:51:54.171708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:51:54.171731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:51:54.171742 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.171763 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-12 13:51:54.171780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:51:54.171791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:51:54.171802 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.171813 | orchestrator | 
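Editor's note: every `skipping: [...] => (item={'key': ..., 'value': {...}})` line above is one iteration of an Ansible loop over the loadbalancer services mapping (haproxy, proxysql, keepalived). The sketch below is illustrative only, not kolla-ansible source: it mimics what the `ansible.builtin.dict2items` filter yields from such a mapping, which is why each loop item carries a `key` and a `value` dict.

```python
# Illustrative sketch (assumed structure, not kolla-ansible source):
# a trimmed-down services mapping like the one iterated in the tasks above.
services = {
    "haproxy": {"container_name": "haproxy", "group": "loadbalancer", "enabled": True},
    "proxysql": {"container_name": "proxysql", "group": "loadbalancer", "enabled": True},
    "keepalived": {"container_name": "keepalived", "group": "loadbalancer", "enabled": True},
}

def dict2items(mapping):
    # Mirrors ansible.builtin.dict2items: one {'key': ..., 'value': ...}
    # item per mapping entry, in insertion order.
    return [{"key": k, "value": v} for k, v in mapping.items()]

for item in dict2items(services):
    # Each iteration corresponds to one "(item={'key': ..., ...})" log line.
    print(item["key"], item["value"]["container_name"])
```

The tasks skip every item here because backend TLS for these services is not enabled in this deployment, so the per-item copy conditions evaluate false on all three nodes.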
2025-07-12 13:51:54.171824 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-07-12 13:51:54.171834 | orchestrator | Saturday 12 July 2025 13:46:32 +0000 (0:00:00.948) 0:01:03.982 ********* 2025-07-12 13:51:54.171845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-12 13:51:54.171857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:51:54.171875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:51:54.171894 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.171905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-12 13:51:54.171921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:51:54.171932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:51:54.171943 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.171954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-12 13:51:54.171966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:51:54.171977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:51:54.171987 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.172004 | orchestrator | 2025-07-12 13:51:54.172016 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-07-12 13:51:54.172032 | orchestrator | Saturday 12 July 2025 13:46:33 +0000 (0:00:00.700) 0:01:04.683 ********* 2025-07-12 13:51:54.172044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-12 13:51:54.172055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:51:54.172071 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:51:54.172083 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.172094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-12 13:51:54.172105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:51:54.172116 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:51:54.172133 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.172151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-12 13:51:54.172163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:51:54.172179 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:51:54.172191 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.172201 | orchestrator | 2025-07-12 13:51:54.172212 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-07-12 13:51:54.172223 | orchestrator | Saturday 12 July 2025 13:46:34 +0000 (0:00:01.465) 0:01:06.149 ********* 2025-07-12 13:51:54.172234 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-07-12 13:51:54.172245 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-07-12 13:51:54.172255 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-07-12 13:51:54.172305 | orchestrator | 2025-07-12 13:51:54.172317 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-07-12 13:51:54.172328 | orchestrator | Saturday 12 July 2025 13:46:36 +0000 (0:00:01.579) 0:01:07.729 ********* 2025-07-12 13:51:54.172339 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-07-12 13:51:54.172350 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-07-12 13:51:54.172360 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-07-12 13:51:54.172371 | orchestrator | 2025-07-12 13:51:54.172382 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-07-12 13:51:54.172393 | orchestrator | Saturday 12 July 2025 13:46:37 +0000 (0:00:01.502) 0:01:09.231 ********* 2025-07-12 13:51:54.172403 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-07-12 13:51:54.172414 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-07-12 13:51:54.172425 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-07-12 13:51:54.172443 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-12 13:51:54.172454 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.172465 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-12 13:51:54.172475 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.172486 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-12 13:51:54.172497 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.172507 | orchestrator | 2025-07-12 13:51:54.172518 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-07-12 13:51:54.172529 | orchestrator | Saturday 12 July 2025 13:46:39 +0000 (0:00:01.379) 0:01:10.611 ********* 2025-07-12 13:51:54.172576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-12 13:51:54.172601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-12 13:51:54.172630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-12 13:51:54.172649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 13:51:54.172661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 13:51:54.172680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 13:51:54.172692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 13:51:54.172711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 13:51:54.172723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 13:51:54.172734 | orchestrator | 2025-07-12 13:51:54.172745 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-07-12 13:51:54.172756 | orchestrator | Saturday 12 July 2025 13:46:41 +0000 (0:00:02.776) 0:01:13.387 ********* 2025-07-12 13:51:54.172766 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:51:54.172777 | orchestrator | 2025-07-12 
13:51:54.172788 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-07-12 13:51:54.172799 | orchestrator | Saturday 12 July 2025 13:46:42 +0000 (0:00:00.839) 0:01:14.227 ********* 2025-07-12 13:51:54.172817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-07-12 13:51:54.172830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-07-12 13:51:54.172849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.172860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.172879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-07-12 13:51:54.172890 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-07-12 13:51:54.172910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-07-12 13:51:54.172929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.172940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-07-12 13:51:54.172951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.172968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-07-12 
13:51:54.172980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.172991 | orchestrator | 2025-07-12 13:51:54.173002 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-07-12 13:51:54.173013 | orchestrator | Saturday 12 July 2025 13:46:46 +0000 (0:00:04.221) 0:01:18.448 ********* 2025-07-12 13:51:54.173029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-07-12 13:51:54.173047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-07-12 13:51:54.173058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.173069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.173080 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.173098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-07-12 13:51:54.173110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-07-12 13:51:54.173126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.173144 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.173154 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.173166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-07-12 13:51:54.173177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-07-12 13:51:54.173194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.173205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.173216 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.173227 | orchestrator | 2025-07-12 13:51:54.173238 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-07-12 13:51:54.173248 | orchestrator | Saturday 12 July 2025 13:46:47 +0000 (0:00:00.883) 0:01:19.331 ********* 2025-07-12 13:51:54.173264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 
'listen_port': '8042'}})  2025-07-12 13:51:54.173282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-07-12 13:51:54.173294 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.173305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-07-12 13:51:54.173316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-07-12 13:51:54.173327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-07-12 13:51:54.173337 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.173348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-07-12 13:51:54.173359 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.173370 | orchestrator | 2025-07-12 13:51:54.173380 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-07-12 13:51:54.173391 | orchestrator | Saturday 12 July 2025 13:46:48 +0000 (0:00:01.125) 0:01:20.457 ********* 2025-07-12 13:51:54.173402 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:51:54.173412 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:51:54.173423 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:51:54.173433 | orchestrator | 2025-07-12 13:51:54.173444 | orchestrator | 
TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-07-12 13:51:54.173455 | orchestrator | Saturday 12 July 2025 13:46:50 +0000 (0:00:01.507) 0:01:21.964 ********* 2025-07-12 13:51:54.173465 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:51:54.173476 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:51:54.173486 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:51:54.173497 | orchestrator | 2025-07-12 13:51:54.173508 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-07-12 13:51:54.173518 | orchestrator | Saturday 12 July 2025 13:46:52 +0000 (0:00:02.172) 0:01:24.137 ********* 2025-07-12 13:51:54.173529 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:51:54.173606 | orchestrator | 2025-07-12 13:51:54.173621 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-07-12 13:51:54.173632 | orchestrator | Saturday 12 July 2025 13:46:53 +0000 (0:00:00.618) 0:01:24.755 ********* 2025-07-12 13:51:54.173652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 13:51:54.173666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.173690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 13:51:54.173702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.173713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.173731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 13:51:54.173743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.173761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.173777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.173788 | orchestrator | 2025-07-12 13:51:54.173799 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-07-12 13:51:54.173810 | orchestrator | Saturday 12 July 2025 13:46:58 +0000 (0:00:05.524) 0:01:30.280 ********* 2025-07-12 13:51:54.173822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 13:51:54.173833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.173851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:51:54.174137 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.174149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 13:51:54.174167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.174179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.174191 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.174202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 13:51:54.174221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.174242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.174253 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.174264 | orchestrator | 2025-07-12 13:51:54.174275 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-07-12 13:51:54.174286 | orchestrator | Saturday 12 July 2025 13:46:59 +0000 (0:00:00.667) 0:01:30.947 ********* 2025-07-12 13:51:54.174298 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-07-12 13:51:54.174318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-07-12 13:51:54.174330 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.174341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-07-12 13:51:54.174352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-07-12 13:51:54.174363 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.174374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-07-12 13:51:54.174385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-07-12 13:51:54.174396 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.174406 | orchestrator | 2025-07-12 13:51:54.174417 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-07-12 13:51:54.174428 | orchestrator | Saturday 12 July 2025 13:47:00 +0000 (0:00:00.882) 0:01:31.829 ********* 2025-07-12 
13:51:54.174438 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:51:54.174449 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:51:54.174460 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:51:54.174470 | orchestrator | 2025-07-12 13:51:54.174481 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-07-12 13:51:54.174492 | orchestrator | Saturday 12 July 2025 13:47:02 +0000 (0:00:01.732) 0:01:33.562 ********* 2025-07-12 13:51:54.174502 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:51:54.174513 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:51:54.174523 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:51:54.174534 | orchestrator | 2025-07-12 13:51:54.174574 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-07-12 13:51:54.174585 | orchestrator | Saturday 12 July 2025 13:47:04 +0000 (0:00:02.075) 0:01:35.638 ********* 2025-07-12 13:51:54.174604 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.174615 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.174625 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.174636 | orchestrator | 2025-07-12 13:51:54.174647 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-07-12 13:51:54.174657 | orchestrator | Saturday 12 July 2025 13:47:04 +0000 (0:00:00.340) 0:01:35.978 ********* 2025-07-12 13:51:54.174668 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:51:54.174678 | orchestrator | 2025-07-12 13:51:54.174689 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-07-12 13:51:54.174699 | orchestrator | Saturday 12 July 2025 13:47:05 +0000 (0:00:00.711) 0:01:36.690 ********* 2025-07-12 13:51:54.174717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 
'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-07-12 13:51:54.174729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-07-12 13:51:54.174741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check 
inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-07-12 13:51:54.174752 | orchestrator | 2025-07-12 13:51:54.174763 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-07-12 13:51:54.174774 | orchestrator | Saturday 12 July 2025 13:47:08 +0000 (0:00:03.061) 0:01:39.751 ********* 2025-07-12 13:51:54.174785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-07-12 13:51:54.174803 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.174814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server 
testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-07-12 13:51:54.174825 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.174864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-07-12 13:51:54.174877 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.174888 | orchestrator | 2025-07-12 13:51:54.174898 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-07-12 13:51:54.174909 | orchestrator | Saturday 12 July 2025 13:47:09 +0000 (0:00:01.496) 0:01:41.248 ********* 2025-07-12 13:51:54.174931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 
'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-12 13:51:54.174949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-12 13:51:54.174961 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.174972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-12 13:51:54.174983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-12 13:51:54.175001 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.175012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-12 13:51:54.175023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-12 13:51:54.175034 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.175045 | orchestrator | 2025-07-12 13:51:54.175056 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-07-12 13:51:54.175067 | orchestrator | Saturday 12 July 2025 13:47:11 +0000 (0:00:01.741) 0:01:42.989 ********* 2025-07-12 13:51:54.175077 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.175088 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.175098 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.175109 | orchestrator | 2025-07-12 13:51:54.175119 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-07-12 13:51:54.175130 | orchestrator | Saturday 12 July 2025 13:47:12 +0000 (0:00:00.904) 0:01:43.894 ********* 2025-07-12 13:51:54.175140 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.175151 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.175162 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.175172 | orchestrator | 2025-07-12 13:51:54.175183 | orchestrator | TASK [include_role : cinder] 
*************************************************** 2025-07-12 13:51:54.175199 | orchestrator | Saturday 12 July 2025 13:47:13 +0000 (0:00:01.013) 0:01:44.907 ********* 2025-07-12 13:51:54.175210 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:51:54.175221 | orchestrator | 2025-07-12 13:51:54.175232 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-07-12 13:51:54.175242 | orchestrator | Saturday 12 July 2025 13:47:14 +0000 (0:00:00.926) 0:01:45.834 ********* 2025-07-12 13:51:54.175253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 13:51:54.175271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.175289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.175301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.175319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 13:51:54.175331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 13:51:54.175347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.175365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.175376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.175387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.175405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.175421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.175439 | orchestrator | 2025-07-12 13:51:54.175451 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-07-12 13:51:54.175461 | orchestrator | Saturday 12 July 2025 13:47:18 +0000 (0:00:04.215) 0:01:50.049 ********* 2025-07-12 13:51:54.175473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 13:51:54.175484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.175495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.175512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.175523 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.175600 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 13:51:54.175624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.175636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.175647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.175658 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.175676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 13:51:54.175688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.175714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.175726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.175737 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.175747 | orchestrator | 2025-07-12 13:51:54.175758 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-07-12 13:51:54.175769 | orchestrator | Saturday 12 July 2025 13:47:20 +0000 (0:00:01.742) 0:01:51.791 ********* 2025-07-12 13:51:54.175780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-12 13:51:54.175790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-12 13:51:54.175800 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.175810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-12 13:51:54.175820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  
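The `healthcheck` dicts that appear throughout the task items above (`interval`, `retries`, `start_period`, `test`, `timeout`) describe container health probes. As a rough illustration of how such a dict could map onto Docker-style health-check flags — a minimal sketch only, with a hypothetical helper name, not kolla-ansible's actual container-management code — consider:

```python
# Hypothetical translation of a kolla-style healthcheck dict (as logged
# above) into Docker CLI health-check flags. Illustration only; the real
# handling lives inside kolla-ansible's container modules.

def healthcheck_flags(hc: dict) -> list[str]:
    # A leading CMD-SHELL marker means the rest of the list is one shell command.
    if hc["test"][0] == "CMD-SHELL":
        test = " ".join(hc["test"][1:])
    else:
        test = " ".join(hc["test"])
    return [
        f"--health-cmd={test}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

# Values taken from the cinder-api item in the log above.
hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8776"],
    "timeout": "30",
}
print(healthcheck_flags(hc))
```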
2025-07-12 13:51:54.175830 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:51:54.175845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-07-12 13:51:54.175855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-07-12 13:51:54.175870 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:51:54.175880 | orchestrator |
2025-07-12 13:51:54.175890 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2025-07-12 13:51:54.175900 | orchestrator | Saturday 12 July 2025 13:47:21 +0000 (0:00:01.320) 0:01:53.112 *********
2025-07-12 13:51:54.175909 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:51:54.175919 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:51:54.175928 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:51:54.175938 | orchestrator |
2025-07-12 13:51:54.175947 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2025-07-12 13:51:54.175957 | orchestrator | Saturday 12 July 2025 13:47:23 +0000 (0:00:01.447) 0:01:54.560 *********
2025-07-12 13:51:54.175966 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:51:54.175976 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:51:54.175985 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:51:54.175995 | orchestrator |
2025-07-12 13:51:54.176004 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2025-07-12 13:51:54.176014 | orchestrator | Saturday 12 July 2025 13:47:25 +0000 (0:00:02.169) 0:01:56.730 *********
2025-07-12 13:51:54.176024 | orchestrator | skipping: [testbed-node-0]
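The `haproxy-config` role above consumes each service's `haproxy` dict (`mode`, `port`, `listen_port`, `external`, `external_fqdn`, `tls_backend`) to template load-balancer configuration. As a rough illustration of how such a dict could become an HAProxy `listen` section — a minimal sketch with a placeholder VIP and a hypothetical renderer function, not kolla-ansible's actual Jinja template — consider:

```python
# Hypothetical renderer: turns a kolla-style haproxy service dict (as seen
# in the log items above) into a minimal HAProxy listen block.
# Illustration only; the VIP address below is a placeholder assumption.

def render_listen(name: str, svc: dict, vip: str, backends: dict) -> str:
    lines = [
        f"listen {name}",
        f"    mode {svc['mode']}",
        f"    bind {vip}:{svc['listen_port']}",
    ]
    for host, addr in backends.items():
        # Each backend node serves the service on svc['port'].
        lines.append(f"    server {host} {addr}:{svc['port']} check")
    return "\n".join(lines)

# cinder_api values and node addresses taken from the log above.
cinder_api = {
    "enabled": "yes", "mode": "http", "external": False,
    "port": "8776", "listen_port": "8776", "tls_backend": "no",
}
backends = {
    "testbed-node-0": "192.168.16.10",
    "testbed-node-1": "192.168.16.11",
    "testbed-node-2": "192.168.16.12",
}
print(render_listen("cinder_api", cinder_api, "192.168.16.254", backends))
```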
2025-07-12 13:51:54.176033 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.176043 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.176052 | orchestrator | 2025-07-12 13:51:54.176062 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-07-12 13:51:54.176071 | orchestrator | Saturday 12 July 2025 13:47:26 +0000 (0:00:00.803) 0:01:57.533 ********* 2025-07-12 13:51:54.176085 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.176095 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.176104 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.176114 | orchestrator | 2025-07-12 13:51:54.176123 | orchestrator | TASK [include_role : designate] ************************************************ 2025-07-12 13:51:54.176133 | orchestrator | Saturday 12 July 2025 13:47:26 +0000 (0:00:00.529) 0:01:58.063 ********* 2025-07-12 13:51:54.176142 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:51:54.176152 | orchestrator | 2025-07-12 13:51:54.176161 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-07-12 13:51:54.176171 | orchestrator | Saturday 12 July 2025 13:47:27 +0000 (0:00:00.943) 0:01:59.006 ********* 2025-07-12 13:51:54.176181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 13:51:54.176192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 13:51:54.176213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 13:51:54.176223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.176238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 13:51:54.176248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.176258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.176268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.176283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.176299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.176309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.176324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.176334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 13:51:54.176344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-12 13:51:54.176360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-07-12 13:51:54.176375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-12 13:51:54.176385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-12 13:51:54.176400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-12 13:51:54.176411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-12 13:51:54.176421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 13:51:54.176431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-07-12 13:51:54.176447 | orchestrator |
2025-07-12 13:51:54.176458 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2025-07-12 13:51:54.176467 | orchestrator | Saturday 12 July 2025 13:47:31 +0000 (0:00:04.007) 0:02:03.014 *********
2025-07-12 13:51:54.176482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-12 13:51:54.176493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-12 13:51:54.176507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-12 13:51:54.176517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-12 13:51:54.176527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-12 13:51:54.176567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-12 13:51:54.176594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-12 13:51:54.176612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 13:51:54.176632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-12 13:51:54.176643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-12 13:51:54.176653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-07-12 13:51:54.176669 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:51:54.176680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-12 13:51:54.176690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 13:51:54.176705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-07-12 13:51:54.176715 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:51:54.176729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-12 13:51:54.176740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-12 13:51:54.176750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-12 13:51:54.176768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-12 13:51:54.176779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-12 13:51:54.176946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 13:51:54.176963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-07-12 13:51:54.176973 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:51:54.176982 | orchestrator |
2025-07-12 13:51:54.176992 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2025-07-12 13:51:54.177002 | orchestrator | Saturday 12 July 2025 13:47:32 +0000 (0:00:00.842) 0:02:03.856 *********
2025-07-12 13:51:54.177012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-07-12 13:51:54.177027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-07-12 13:51:54.177037 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:51:54.177047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-07-12 13:51:54.177057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-07-12 13:51:54.177074 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:51:54.177083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-07-12 13:51:54.177093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-07-12 13:51:54.177103 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:51:54.177112 | orchestrator |
2025-07-12 13:51:54.177122 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2025-07-12 13:51:54.177132 | orchestrator | Saturday 12 July 2025 13:47:33 +0000 (0:00:01.050) 0:02:04.907 *********
2025-07-12 13:51:54.177141 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:51:54.177151 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:51:54.177160 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:51:54.177169 | orchestrator |
2025-07-12 13:51:54.177179 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2025-07-12 13:51:54.177189 | orchestrator | Saturday 12 July 2025 13:47:35 +0000 (0:00:01.887) 0:02:06.794 *********
2025-07-12 13:51:54.177198 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:51:54.177208 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:51:54.177217 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:51:54.177227 | orchestrator |
2025-07-12 13:51:54.177236 | orchestrator | TASK [include_role : etcd] *****************************************************
2025-07-12 13:51:54.177246 | orchestrator | Saturday 12 July 2025 13:47:37 +0000 (0:00:02.079) 0:02:08.874 *********
2025-07-12 13:51:54.177255 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:51:54.177265 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:51:54.177274 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:51:54.177283 | orchestrator |
2025-07-12 13:51:54.177293 | orchestrator | TASK [include_role : glance] ***************************************************
2025-07-12 13:51:54.177303 | orchestrator | Saturday 12 July 2025 13:47:37 +0000 (0:00:00.365) 0:02:09.239 *********
2025-07-12 13:51:54.177312 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 13:51:54.177322 | orchestrator |
2025-07-12 13:51:54.177331 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2025-07-12 13:51:54.177340 | orchestrator | Saturday 12 July 2025 13:47:38 +0000 (0:00:00.825) 0:02:10.065 *********
2025-07-12 13:51:54.177420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-07-12 13:51:54.177445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-07-12 13:51:54.177569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-07-12 13:51:54.177595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-07-12 13:51:54.177681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-07-12 13:51:54.177702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-07-12 13:51:54.177726 | orchestrator |
2025-07-12 13:51:54.177742 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] ***
2025-07-12 13:51:54.177752 | orchestrator | Saturday 12 July 2025 13:47:42 +0000 (0:00:04.335) 0:02:14.401 *********
2025-07-12 13:51:54.177823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-07-12 13:51:54.177844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-07-12 13:51:54.177862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-07-12 13:51:54.177873 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:51:54.177962 | orchestrator | skipping:
[testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-12 13:51:54.177986 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.177997 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 13:51:54.178114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-12 13:51:54.178139 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.178149 | orchestrator | 2025-07-12 13:51:54.178159 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-07-12 13:51:54.178174 | orchestrator | 
Saturday 12 July 2025 13:47:45 +0000 (0:00:03.026) 0:02:17.427 *********
2025-07-12 13:51:54.178184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-07-12 13:51:54.178195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-07-12 13:51:54.178205 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:51:54.178215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-07-12 13:51:54.178225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-07-12 13:51:54.178235 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:51:54.178245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-07-12 13:51:54.178315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-07-12 13:51:54.178341 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:51:54.178350 | orchestrator |
2025-07-12 13:51:54.178360 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2025-07-12 13:51:54.178370 | orchestrator | Saturday 12 July 2025 13:47:49 +0000 (0:00:03.330) 0:02:20.758 *********
2025-07-12 13:51:54.178379 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:51:54.178389 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:51:54.178398 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:51:54.178408 | orchestrator |
2025-07-12 13:51:54.178418 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2025-07-12 13:51:54.178427 | orchestrator | Saturday 12 July 2025 13:47:50 +0000 (0:00:01.530) 0:02:22.289 *********
2025-07-12 13:51:54.178436 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:51:54.178446 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:51:54.178455 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:51:54.178465 | orchestrator |
2025-07-12 13:51:54.178474 | orchestrator | TASK [include_role : gnocchi] **************************************************
2025-07-12 13:51:54.178484 | orchestrator | Saturday 12 July 2025 13:47:52 +0000 (0:00:01.976) 0:02:24.265 *********
2025-07-12 13:51:54.178493 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:51:54.178502 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:51:54.178512 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:51:54.178521 | orchestrator |
2025-07-12 13:51:54.178536 | orchestrator | TASK [include_role : grafana] **************************************************
2025-07-12 13:51:54.178573 | orchestrator | Saturday 12 July 2025 13:47:53 +0000 (0:00:00.326) 0:02:24.592 *********
2025-07-12 13:51:54.178583 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 13:51:54.178592 | orchestrator |
2025-07-12 13:51:54.178602 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2025-07-12 13:51:54.178611 | orchestrator | Saturday 12 July 2025 13:47:53 +0000 (0:00:00.822) 0:02:25.414 *********
2025-07-12 13:51:54.178621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 13:51:54.178633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 13:51:54.178644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 13:51:54.178660 | 
orchestrator | 2025-07-12 13:51:54.178670 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-07-12 13:51:54.178679 | orchestrator | Saturday 12 July 2025 13:47:57 +0000 (0:00:03.354) 0:02:28.769 ********* 2025-07-12 13:51:54.178751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-12 13:51:54.178766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-12 13:51:54.178775 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.178785 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.178794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-12 13:51:54.178804 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.178814 | orchestrator | 2025-07-12 13:51:54.178823 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-07-12 13:51:54.178833 | orchestrator | Saturday 12 July 2025 13:47:57 +0000 (0:00:00.407) 0:02:29.176 ********* 2025-07-12 13:51:54.178843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-07-12 13:51:54.178853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-07-12 13:51:54.178863 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.178872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-07-12 13:51:54.178882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-07-12 13:51:54.178898 | orchestrator | skipping: 
[testbed-node-1]
2025-07-12 13:51:54.178908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-07-12 13:51:54.178917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-07-12 13:51:54.178926 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:51:54.178936 | orchestrator |
2025-07-12 13:51:54.178945 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2025-07-12 13:51:54.178954 | orchestrator | Saturday 12 July 2025 13:47:58 +0000 (0:00:00.629) 0:02:29.806 *********
2025-07-12 13:51:54.178964 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:51:54.178973 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:51:54.178982 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:51:54.178991 | orchestrator |
2025-07-12 13:51:54.179001 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2025-07-12 13:51:54.179010 | orchestrator | Saturday 12 July 2025 13:47:59 +0000 (0:00:01.653) 0:02:31.460 *********
2025-07-12 13:51:54.179020 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:51:54.179029 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:51:54.179039 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:51:54.179048 | orchestrator |
2025-07-12 13:51:54.179134 | orchestrator | TASK [include_role : heat] *****************************************************
2025-07-12 13:51:54.179149 | orchestrator | Saturday 12 July 2025 13:48:01 +0000 (0:00:01.980) 0:02:33.441 *********
2025-07-12 13:51:54.179158 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:51:54.179168 | orchestrator | skipping: [testbed-node-1] 2025-07-12
13:51:54.179177 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.179186 | orchestrator | 2025-07-12 13:51:54.179196 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-07-12 13:51:54.179205 | orchestrator | Saturday 12 July 2025 13:48:02 +0000 (0:00:00.315) 0:02:33.756 ********* 2025-07-12 13:51:54.179215 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:51:54.179224 | orchestrator | 2025-07-12 13:51:54.179234 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-07-12 13:51:54.179243 | orchestrator | Saturday 12 July 2025 13:48:03 +0000 (0:00:00.950) 0:02:34.707 ********* 2025-07-12 13:51:54.179260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 13:51:54.179340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 13:51:54.179362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 13:51:54.179380 | orchestrator | 2025-07-12 13:51:54.179390 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-07-12 13:51:54.179399 | orchestrator | Saturday 12 July 2025 13:48:07 +0000 (0:00:03.951) 0:02:38.658 ********* 2025-07-12 13:51:54.179474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 
'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 13:51:54.179489 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.179500 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 13:51:54.179517 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.179661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 13:51:54.179680 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.179689 | orchestrator | 2025-07-12 13:51:54.179707 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-07-12 13:51:54.179717 | orchestrator | Saturday 12 July 2025 13:48:08 +0000 (0:00:00.893) 0:02:39.551 ********* 2025-07-12 13:51:54.179727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-07-12 13:51:54.179738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-07-12 13:51:54.179749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-07-12 13:51:54.179759 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-07-12 13:51:54.179769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-07-12 13:51:54.179778 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.179788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-07-12 13:51:54.179798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-07-12 13:51:54.179863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-07-12 13:51:54.179877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-07-12 13:51:54.179887 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-07-12 13:51:54.179902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-07-12 13:51:54.179918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-07-12 13:51:54.179928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-07-12 13:51:54.179938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-07-12 13:51:54.179948 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.179957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  
2025-07-12 13:51:54.179967 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.179976 | orchestrator | 2025-07-12 13:51:54.179985 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-07-12 13:51:54.179995 | orchestrator | Saturday 12 July 2025 13:48:08 +0000 (0:00:00.940) 0:02:40.491 ********* 2025-07-12 13:51:54.180004 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:51:54.180014 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:51:54.180023 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:51:54.180032 | orchestrator | 2025-07-12 13:51:54.180042 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-07-12 13:51:54.180051 | orchestrator | Saturday 12 July 2025 13:48:10 +0000 (0:00:01.698) 0:02:42.190 ********* 2025-07-12 13:51:54.180060 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:51:54.180070 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:51:54.180079 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:51:54.180088 | orchestrator | 2025-07-12 13:51:54.180098 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-07-12 13:51:54.180107 | orchestrator | Saturday 12 July 2025 13:48:12 +0000 (0:00:02.033) 0:02:44.224 ********* 2025-07-12 13:51:54.180117 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.180124 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.180132 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.180140 | orchestrator | 2025-07-12 13:51:54.180147 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-07-12 13:51:54.180155 | orchestrator | Saturday 12 July 2025 13:48:13 +0000 (0:00:00.311) 0:02:44.535 ********* 2025-07-12 13:51:54.180163 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.180170 | orchestrator | skipping: [testbed-node-1] 
2025-07-12 13:51:54.180178 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.180185 | orchestrator | 2025-07-12 13:51:54.180193 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-07-12 13:51:54.180200 | orchestrator | Saturday 12 July 2025 13:48:13 +0000 (0:00:00.305) 0:02:44.840 ********* 2025-07-12 13:51:54.180208 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:51:54.180216 | orchestrator | 2025-07-12 13:51:54.180224 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-07-12 13:51:54.180231 | orchestrator | Saturday 12 July 2025 13:48:14 +0000 (0:00:01.139) 0:02:45.980 ********* 2025-07-12 13:51:54.180286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 13:51:54.180313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 13:51:54.180322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 13:51:54.180331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 13:51:54.180340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 13:51:54.180393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 13:51:54.180414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 13:51:54.180423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 13:51:54.180432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 13:51:54.180440 | 
orchestrator | 2025-07-12 13:51:54.180448 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-07-12 13:51:54.180456 | orchestrator | Saturday 12 July 2025 13:48:17 +0000 (0:00:03.202) 0:02:49.183 ********* 2025-07-12 13:51:54.180464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 13:51:54.180517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2025-07-12 13:51:54.180558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 13:51:54.180573 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.180592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 13:51:54.180602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 13:51:54.180610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 13:51:54.180618 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.180680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 13:51:54.180699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 13:51:54.180712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 13:51:54.180720 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.180728 | orchestrator | 2025-07-12 13:51:54.180736 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-07-12 13:51:54.180743 | orchestrator | Saturday 12 July 2025 13:48:18 +0000 (0:00:00.598) 0:02:49.781 ********* 2025-07-12 13:51:54.180751 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-12 13:51:54.180760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-12 13:51:54.180768 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.180776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-12 13:51:54.180784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-12 13:51:54.180792 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.180800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-12 13:51:54.180808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-12 13:51:54.180823 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.180831 
| orchestrator | 2025-07-12 13:51:54.180839 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-07-12 13:51:54.180846 | orchestrator | Saturday 12 July 2025 13:48:19 +0000 (0:00:01.068) 0:02:50.849 ********* 2025-07-12 13:51:54.180854 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:51:54.180862 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:51:54.180870 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:51:54.180878 | orchestrator | 2025-07-12 13:51:54.180885 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-07-12 13:51:54.180893 | orchestrator | Saturday 12 July 2025 13:48:20 +0000 (0:00:01.323) 0:02:52.173 ********* 2025-07-12 13:51:54.180901 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:51:54.180908 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:51:54.180916 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:51:54.180923 | orchestrator | 2025-07-12 13:51:54.180931 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-07-12 13:51:54.180986 | orchestrator | Saturday 12 July 2025 13:48:23 +0000 (0:00:02.469) 0:02:54.642 ********* 2025-07-12 13:51:54.180997 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.181004 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.181012 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.181020 | orchestrator | 2025-07-12 13:51:54.181028 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-07-12 13:51:54.181036 | orchestrator | Saturday 12 July 2025 13:48:23 +0000 (0:00:00.317) 0:02:54.960 ********* 2025-07-12 13:51:54.181044 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:51:54.181051 | orchestrator | 2025-07-12 13:51:54.181059 | orchestrator | TASK [haproxy-config : Copying over magnum 
haproxy config] ********************* 2025-07-12 13:51:54.181067 | orchestrator | Saturday 12 July 2025 13:48:24 +0000 (0:00:01.201) 0:02:56.162 ********* 2025-07-12 13:51:54.181079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 13:51:54.181088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.181097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 13:51:54.181111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.181169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 13:51:54.181184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.181193 | orchestrator | 2025-07-12 13:51:54.181201 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-07-12 13:51:54.181209 | orchestrator | Saturday 12 July 2025 13:48:28 +0000 (0:00:03.975) 0:03:00.137 ********* 2025-07-12 13:51:54.181217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 13:51:54.181232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 13:51:54.181295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.181307 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.181315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.181323 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.181335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 13:51:54.181343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.181357 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.181365 | orchestrator | 2025-07-12 13:51:54.181373 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-07-12 13:51:54.181381 | orchestrator | Saturday 12 July 2025 13:48:29 +0000 (0:00:00.758) 0:03:00.896 ********* 2025-07-12 13:51:54.181389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-07-12 13:51:54.181398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-07-12 13:51:54.181405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-07-12 13:51:54.181413 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.181421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-07-12 13:51:54.181429 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.181437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-07-12 13:51:54.181445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-07-12 13:51:54.181500 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.181510 | orchestrator | 2025-07-12 13:51:54.181518 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-07-12 13:51:54.181526 | orchestrator | Saturday 12 July 2025 13:48:31 +0000 (0:00:01.714) 0:03:02.610 ********* 2025-07-12 13:51:54.181534 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:51:54.181562 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:51:54.181571 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:51:54.181578 | orchestrator | 2025-07-12 13:51:54.181586 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-07-12 13:51:54.181594 | orchestrator | Saturday 12 July 2025 13:48:32 +0000 (0:00:01.314) 0:03:03.925 ********* 2025-07-12 13:51:54.181602 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:51:54.181610 | orchestrator | changed: 
[testbed-node-1] 2025-07-12 13:51:54.181617 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:51:54.181625 | orchestrator | 2025-07-12 13:51:54.181633 | orchestrator | TASK [include_role : manila] *************************************************** 2025-07-12 13:51:54.181641 | orchestrator | Saturday 12 July 2025 13:48:34 +0000 (0:00:02.334) 0:03:06.260 ********* 2025-07-12 13:51:54.181648 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:51:54.181656 | orchestrator | 2025-07-12 13:51:54.181664 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-07-12 13:51:54.181671 | orchestrator | Saturday 12 July 2025 13:48:35 +0000 (0:00:01.094) 0:03:07.354 ********* 2025-07-12 13:51:54.181687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-07-12 13:51:54.181701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.181710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.181718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.181849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-07-12 13:51:54.181864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.181884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 
'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-07-12 13:51:54.181893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.181901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.181909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.181964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.182010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.182049 | orchestrator | 2025-07-12 13:51:54.182060 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-07-12 13:51:54.182067 | orchestrator | Saturday 12 July 2025 13:48:39 +0000 (0:00:04.069) 0:03:11.423 ********* 2025-07-12 13:51:54.182080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-07-12 13:51:54.182089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.182097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-07-12 13:51:54.182162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.182173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.182192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.182201 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.182209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.182217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.182225 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.182233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-07-12 13:51:54.182289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.182300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.182319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.182327 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.182335 | orchestrator | 2025-07-12 13:51:54.182344 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-07-12 13:51:54.182351 | orchestrator | Saturday 12 July 2025 13:48:40 +0000 (0:00:00.825) 0:03:12.249 ********* 2025-07-12 13:51:54.182359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-07-12 13:51:54.182368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-07-12 13:51:54.182376 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.182384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-07-12 13:51:54.182392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-07-12 13:51:54.182400 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.182408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-07-12 13:51:54.182416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-07-12 13:51:54.182424 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.182432 | orchestrator | 2025-07-12 13:51:54.182440 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-07-12 13:51:54.182448 | orchestrator | Saturday 12 July 2025 13:48:41 +0000 (0:00:00.872) 0:03:13.121 ********* 2025-07-12 13:51:54.182456 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:51:54.182463 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:51:54.182471 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:51:54.182479 | orchestrator | 2025-07-12 13:51:54.182487 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-07-12 13:51:54.182494 | orchestrator | Saturday 12 July 2025 13:48:43 +0000 (0:00:01.636) 0:03:14.758 ********* 2025-07-12 13:51:54.182502 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:51:54.182510 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:51:54.182517 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:51:54.182525 | orchestrator | 2025-07-12 13:51:54.182533 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-07-12 13:51:54.182596 | orchestrator | Saturday 12 July 2025 13:48:45 +0000 (0:00:02.026) 0:03:16.784 ********* 2025-07-12 13:51:54.182612 | 
orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:51:54.182620 | orchestrator | 2025-07-12 13:51:54.182627 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-07-12 13:51:54.182635 | orchestrator | Saturday 12 July 2025 13:48:46 +0000 (0:00:01.098) 0:03:17.883 ********* 2025-07-12 13:51:54.182643 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-07-12 13:51:54.182651 | orchestrator | 2025-07-12 13:51:54.182659 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-07-12 13:51:54.182667 | orchestrator | Saturday 12 July 2025 13:48:49 +0000 (0:00:03.005) 0:03:20.888 ********* 2025-07-12 13:51:54.182750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 13:51:54.182764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-12 13:51:54.182771 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.182822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 13:51:54.182838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-12 13:51:54.182845 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.182856 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 13:51:54.182864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 
'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-12 13:51:54.182875 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.182882 | orchestrator | 2025-07-12 13:51:54.182889 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-07-12 13:51:54.182895 | orchestrator | Saturday 12 July 2025 13:48:51 +0000 (0:00:02.495) 0:03:23.383 ********* 2025-07-12 13:51:54.182945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 13:51:54.182962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-12 13:51:54.182969 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.182977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 13:51:54.183031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  
2025-07-12 13:51:54.183041 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.183052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 13:51:54.183059 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-12 13:51:54.183071 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.183078 | orchestrator | 2025-07-12 13:51:54.183085 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-07-12 13:51:54.183091 | orchestrator | Saturday 12 July 2025 13:48:54 +0000 (0:00:02.498) 0:03:25.881 ********* 2025-07-12 13:51:54.183098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-07-12 13:51:54.183145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 
rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-07-12 13:51:54.183155 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.183161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-07-12 13:51:54.183172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-07-12 13:51:54.183179 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.183186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 
3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-07-12 13:51:54.183193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-07-12 13:51:54.183205 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.183211 | orchestrator | 2025-07-12 13:51:54.183218 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-07-12 13:51:54.183225 | orchestrator | Saturday 12 July 2025 13:48:57 +0000 (0:00:02.750) 0:03:28.632 ********* 2025-07-12 13:51:54.183231 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:51:54.183238 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:51:54.183244 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:51:54.183251 | orchestrator | 2025-07-12 13:51:54.183258 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-07-12 13:51:54.183264 | orchestrator | Saturday 12 July 2025 13:48:59 +0000 (0:00:02.203) 0:03:30.835 ********* 2025-07-12 13:51:54.183270 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.183277 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.183283 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.183290 | orchestrator | 2025-07-12 13:51:54.183296 | 
orchestrator | TASK [include_role : masakari] ************************************************* 2025-07-12 13:51:54.183303 | orchestrator | Saturday 12 July 2025 13:49:00 +0000 (0:00:01.490) 0:03:32.326 ********* 2025-07-12 13:51:54.183309 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.183316 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.183322 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.183329 | orchestrator | 2025-07-12 13:51:54.183335 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-07-12 13:51:54.183342 | orchestrator | Saturday 12 July 2025 13:49:01 +0000 (0:00:00.315) 0:03:32.641 ********* 2025-07-12 13:51:54.183349 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:51:54.183355 | orchestrator | 2025-07-12 13:51:54.183362 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-07-12 13:51:54.183368 | orchestrator | Saturday 12 July 2025 13:49:02 +0000 (0:00:01.178) 0:03:33.820 ********* 2025-07-12 13:51:54.183419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-07-12 13:51:54.183429 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-07-12 13:51:54.183440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-07-12 13:51:54.183452 | orchestrator | 2025-07-12 13:51:54.183459 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-07-12 13:51:54.183465 | orchestrator | Saturday 12 July 2025 13:49:04 +0000 (0:00:02.134) 0:03:35.954 ********* 2025-07-12 13:51:54.183472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-07-12 13:51:54.183479 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.183486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-07-12 13:51:54.183492 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.183555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-07-12 13:51:54.183566 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.183573 | orchestrator | 2025-07-12 13:51:54.183579 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-07-12 13:51:54.183586 | orchestrator | Saturday 12 July 2025 13:49:04 +0000 (0:00:00.395) 0:03:36.350 ********* 2025-07-12 13:51:54.183593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-07-12 13:51:54.183602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-07-12 13:51:54.183622 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.183638 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.183646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-07-12 13:51:54.183653 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.183659 | orchestrator | 2025-07-12 13:51:54.183666 | 
orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-07-12 13:51:54.183673 | orchestrator | Saturday 12 July 2025 13:49:05 +0000 (0:00:00.590) 0:03:36.940 ********* 2025-07-12 13:51:54.183679 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.183686 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.183692 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.183699 | orchestrator | 2025-07-12 13:51:54.183705 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-07-12 13:51:54.183712 | orchestrator | Saturday 12 July 2025 13:49:06 +0000 (0:00:00.802) 0:03:37.743 ********* 2025-07-12 13:51:54.183718 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.183725 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.183732 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.183738 | orchestrator | 2025-07-12 13:51:54.183745 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-07-12 13:51:54.183752 | orchestrator | Saturday 12 July 2025 13:49:07 +0000 (0:00:01.339) 0:03:39.082 ********* 2025-07-12 13:51:54.183758 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.183765 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.183771 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.183778 | orchestrator | 2025-07-12 13:51:54.183784 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-07-12 13:51:54.183791 | orchestrator | Saturday 12 July 2025 13:49:07 +0000 (0:00:00.308) 0:03:39.391 ********* 2025-07-12 13:51:54.183798 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:51:54.183804 | orchestrator | 2025-07-12 13:51:54.183811 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 
2025-07-12 13:51:54.183817 | orchestrator | Saturday 12 July 2025 13:49:09 +0000 (0:00:01.451) 0:03:40.842 ********* 2025-07-12 13:51:54.183825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 13:51:54.183878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.183894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.183902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.183924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-12 13:51:54.183932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.183939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 13:51:54.183988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 13:51:54.184004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.184015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 13:51:54.184022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 13:51:54.184029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.184076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.184090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.184104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-12 13:51:54.184111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.184118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 13:51:54.184125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-12 13:51:54.184132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 
'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.184190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.184205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-12 13:51:54.184213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 13:51:54.184220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-12 13:51:54.184227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 13:51:54.184234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.184288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.184302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 13:51:54.184309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.184316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-12 13:51:54.184323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 
13:51:54.184330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.184383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-12 13:51:54.184393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-12 13:51:54.184405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.184412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 13:51:54.184419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.184470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.184480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.184490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-12 13:51:54.184497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.184504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 13:51:54.184511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 13:51:54.184680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.184696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 13:51:54.184709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.184716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  
2025-07-12 13:51:54.184723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-07-12 13:51:54.184734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-07-12 13:51:54.184807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-07-12 13:51:54.184818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-07-12 13:51:54.184828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-07-12 13:51:54.184835 | orchestrator |
2025-07-12 13:51:54.184841 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2025-07-12 13:51:54.184848 | orchestrator | Saturday 12 July 2025 13:49:13 +0000 (0:00:04.358) 0:03:45.201 *********
2025-07-12 13:51:54.184854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 13:51:54.184861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-07-12 13:51:54.184912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent',
'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.184921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.184931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-12 13:51:54.184938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.184945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 13:51:54.184959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 13:51:54.184965 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.185011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 13:51:54.185024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 13:51:54.185031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.185037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  
2025-07-12 13:51:54.185049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.185099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-12 13:51:54.185108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.185119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 13:51:54.185126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-12 13:51:54.185137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 13:51:54.185143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.185190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.185199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.185210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-12 13:51:54.185221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 13:51:54.185228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.185235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 13:51:54.185281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-12 13:51:54.185290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.185300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.185307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': 
{'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.185318 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.185325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-12 13:51:54.185371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 13:51:54.185380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.185390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.185397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 13:51:54.185408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-12 13:51:54.185414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 13:51:54.185421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 13:51:54.185470 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.185479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.185489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 13:51:54.185500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.185507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-12 13:51:54.185514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-12 13:51:54.185536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-12 13:51:54.185565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 13:51:54.185575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': 
True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.185587 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.185593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.185600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': 
{'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-12 13:51:54.185607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-12 13:51:54.185630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.185637 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.185643 | orchestrator | 2025-07-12 13:51:54.185649 | orchestrator | TASK [haproxy-config : Configuring firewall for 
neutron] *********************** 2025-07-12 13:51:54.185656 | orchestrator | Saturday 12 July 2025 13:49:15 +0000 (0:00:01.504) 0:03:46.706 ********* 2025-07-12 13:51:54.185662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-07-12 13:51:54.185668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-07-12 13:51:54.185681 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.185690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-07-12 13:51:54.185696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-07-12 13:51:54.185703 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.185709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-07-12 13:51:54.185715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-07-12 13:51:54.185721 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.185727 | orchestrator | 2025-07-12 13:51:54.185733 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-07-12 13:51:54.185739 | orchestrator | Saturday 12 
July 2025 13:49:17 +0000 (0:00:02.036) 0:03:48.743 ********* 2025-07-12 13:51:54.185745 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:51:54.185751 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:51:54.185757 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:51:54.185764 | orchestrator | 2025-07-12 13:51:54.185770 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-07-12 13:51:54.185776 | orchestrator | Saturday 12 July 2025 13:49:18 +0000 (0:00:01.209) 0:03:49.952 ********* 2025-07-12 13:51:54.185782 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:51:54.185788 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:51:54.185794 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:51:54.185800 | orchestrator | 2025-07-12 13:51:54.185806 | orchestrator | TASK [include_role : placement] ************************************************ 2025-07-12 13:51:54.185812 | orchestrator | Saturday 12 July 2025 13:49:20 +0000 (0:00:02.053) 0:03:52.005 ********* 2025-07-12 13:51:54.185818 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:51:54.185824 | orchestrator | 2025-07-12 13:51:54.185830 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-07-12 13:51:54.185836 | orchestrator | Saturday 12 July 2025 13:49:21 +0000 (0:00:01.190) 0:03:53.196 ********* 2025-07-12 13:51:54.185843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 13:51:54.185865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 13:51:54.185880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': 
{'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 13:51:54.185887 | orchestrator | 2025-07-12 13:51:54.185893 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-07-12 13:51:54.185899 | orchestrator | Saturday 12 July 2025 13:49:24 +0000 (0:00:03.275) 0:03:56.471 ********* 2025-07-12 13:51:54.185906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 13:51:54.185912 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.185919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 13:51:54.185925 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.185946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 13:51:54.185957 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.185964 | orchestrator | 2025-07-12 13:51:54.185970 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-07-12 13:51:54.185976 | orchestrator | Saturday 12 July 2025 13:49:25 +0000 (0:00:00.514) 0:03:56.986 ********* 2025-07-12 
13:51:54.185982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-12 13:51:54.185992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-12 13:51:54.185999 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.186005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-12 13:51:54.186011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-12 13:51:54.186038 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.186045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-12 13:51:54.186053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-12 13:51:54.186059 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.186065 | orchestrator | 2025-07-12 13:51:54.186071 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-07-12 13:51:54.186077 | orchestrator | Saturday 12 July 2025 13:49:26 +0000 (0:00:00.758) 
0:03:57.744 ********* 2025-07-12 13:51:54.186083 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:51:54.186089 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:51:54.186095 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:51:54.186103 | orchestrator | 2025-07-12 13:51:54.186110 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-07-12 13:51:54.186116 | orchestrator | Saturday 12 July 2025 13:49:27 +0000 (0:00:01.600) 0:03:59.345 ********* 2025-07-12 13:51:54.186123 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:51:54.186130 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:51:54.186136 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:51:54.186143 | orchestrator | 2025-07-12 13:51:54.186150 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-07-12 13:51:54.186156 | orchestrator | Saturday 12 July 2025 13:49:29 +0000 (0:00:01.963) 0:04:01.308 ********* 2025-07-12 13:51:54.186163 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:51:54.186170 | orchestrator | 2025-07-12 13:51:54.186177 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-07-12 13:51:54.186184 | orchestrator | Saturday 12 July 2025 13:49:31 +0000 (0:00:01.246) 0:04:02.555 ********* 2025-07-12 13:51:54.186213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 13:51:54.186222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.186233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 
13:51:54.186241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 13:51:54.186249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.186261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.186284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 13:51:54.186296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.186302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.186309 | orchestrator | 2025-07-12 13:51:54.186315 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-07-12 13:51:54.186322 | orchestrator | Saturday 12 July 2025 13:49:35 +0000 (0:00:04.492) 0:04:07.048 ********* 2025-07-12 13:51:54.186328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 13:51:54.186355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.186362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 
5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.186369 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.186379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 13:51:54.186386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}})  2025-07-12 13:51:54.186397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.186404 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.186425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 13:51:54.186436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.186443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.186449 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.186455 | orchestrator | 2025-07-12 13:51:54.186462 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-07-12 13:51:54.186468 | orchestrator | Saturday 12 July 2025 13:49:36 +0000 (0:00:00.994) 0:04:08.042 ********* 2025-07-12 13:51:54.186474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-07-12 13:51:54.186481 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-07-12 13:51:54.186492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-07-12 13:51:54.186498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-07-12 13:51:54.186505 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.186511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-07-12 13:51:54.186517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-07-12 13:51:54.186523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-07-12 13:51:54.186531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-07-12 13:51:54.186557 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.186579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-07-12 13:51:54.186586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-07-12 13:51:54.186593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-07-12 13:51:54.186599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-07-12 13:51:54.186605 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.186611 | orchestrator | 2025-07-12 13:51:54.186617 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-07-12 13:51:54.186623 | orchestrator | Saturday 12 July 2025 13:49:37 +0000 (0:00:00.844) 0:04:08.886 ********* 2025-07-12 13:51:54.186630 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:51:54.186636 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:51:54.186642 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:51:54.186648 | orchestrator | 2025-07-12 13:51:54.186654 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-07-12 13:51:54.186662 | orchestrator | Saturday 12 July 2025 13:49:39 +0000 (0:00:01.683) 0:04:10.570 ********* 2025-07-12 13:51:54.186671 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:51:54.186683 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:51:54.186689 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:51:54.186695 | orchestrator | 2025-07-12 
13:51:54.186702 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-07-12 13:51:54.186708 | orchestrator | Saturday 12 July 2025 13:49:41 +0000 (0:00:02.107) 0:04:12.678 ********* 2025-07-12 13:51:54.186714 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:51:54.186725 | orchestrator | 2025-07-12 13:51:54.186731 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-07-12 13:51:54.186737 | orchestrator | Saturday 12 July 2025 13:49:42 +0000 (0:00:01.538) 0:04:14.217 ********* 2025-07-12 13:51:54.186743 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-07-12 13:51:54.186749 | orchestrator | 2025-07-12 13:51:54.186755 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-07-12 13:51:54.186761 | orchestrator | Saturday 12 July 2025 13:49:43 +0000 (0:00:01.038) 0:04:15.255 ********* 2025-07-12 13:51:54.186768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-07-12 13:51:54.186775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-07-12 13:51:54.186781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-07-12 13:51:54.186787 | orchestrator | 2025-07-12 13:51:54.186794 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-07-12 13:51:54.186800 | orchestrator | Saturday 12 July 2025 13:49:47 +0000 (0:00:03.754) 0:04:19.010 ********* 2025-07-12 13:51:54.186822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-12 13:51:54.186829 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.186835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout 
tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-12 13:51:54.186842 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.186851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-12 13:51:54.186862 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.186868 | orchestrator | 2025-07-12 13:51:54.186874 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-07-12 13:51:54.186881 | orchestrator | Saturday 12 July 2025 13:49:48 +0000 (0:00:01.409) 0:04:20.419 ********* 2025-07-12 13:51:54.186887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-07-12 13:51:54.186893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-07-12 13:51:54.186900 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.186906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-07-12 13:51:54.186912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-07-12 13:51:54.186919 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.186925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-07-12 13:51:54.186931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-07-12 13:51:54.186937 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.186943 | orchestrator | 2025-07-12 13:51:54.186950 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-07-12 13:51:54.186956 | orchestrator | Saturday 12 July 2025 13:49:50 +0000 (0:00:01.863) 0:04:22.283 ********* 2025-07-12 13:51:54.186962 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:51:54.186968 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:51:54.186974 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:51:54.186980 | orchestrator | 2025-07-12 13:51:54.186986 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-07-12 13:51:54.186992 | orchestrator | Saturday 12 July 2025 13:49:53 +0000 (0:00:02.324) 0:04:24.607 ********* 2025-07-12 13:51:54.186998 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:51:54.187004 | orchestrator | 
changed: [testbed-node-1] 2025-07-12 13:51:54.187010 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:51:54.187016 | orchestrator | 2025-07-12 13:51:54.187066 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-07-12 13:51:54.187072 | orchestrator | Saturday 12 July 2025 13:49:56 +0000 (0:00:03.019) 0:04:27.627 ********* 2025-07-12 13:51:54.187079 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-07-12 13:51:54.187085 | orchestrator | 2025-07-12 13:51:54.187091 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-07-12 13:51:54.187114 | orchestrator | Saturday 12 July 2025 13:49:56 +0000 (0:00:00.855) 0:04:28.483 ********* 2025-07-12 13:51:54.187128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-12 13:51:54.187135 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.187141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 
'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-12 13:51:54.187147 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.187157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-12 13:51:54.187163 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.187170 | orchestrator | 2025-07-12 13:51:54.187176 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-07-12 13:51:54.187182 | orchestrator | Saturday 12 July 2025 13:49:58 +0000 (0:00:01.360) 0:04:29.844 ********* 2025-07-12 13:51:54.187188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-12 13:51:54.187195 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.187201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': 
False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-12 13:51:54.187207 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.187213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-12 13:51:54.187220 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.187226 | orchestrator | 2025-07-12 13:51:54.187232 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-07-12 13:51:54.187243 | orchestrator | Saturday 12 July 2025 13:50:00 +0000 (0:00:01.835) 0:04:31.679 ********* 2025-07-12 13:51:54.187249 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.187255 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.187261 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.187267 | orchestrator | 2025-07-12 13:51:54.187273 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-07-12 13:51:54.187293 | orchestrator | Saturday 12 July 2025 13:50:01 +0000 (0:00:01.328) 0:04:33.008 ********* 2025-07-12 13:51:54.187300 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:51:54.187307 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:51:54.187313 | orchestrator | ok: [testbed-node-2] 
2025-07-12 13:51:54.187319 | orchestrator | 2025-07-12 13:51:54.187325 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-07-12 13:51:54.187331 | orchestrator | Saturday 12 July 2025 13:50:04 +0000 (0:00:02.536) 0:04:35.545 ********* 2025-07-12 13:51:54.187337 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:51:54.187343 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:51:54.187349 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:51:54.187355 | orchestrator | 2025-07-12 13:51:54.187361 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-07-12 13:51:54.187367 | orchestrator | Saturday 12 July 2025 13:50:07 +0000 (0:00:03.212) 0:04:38.758 ********* 2025-07-12 13:51:54.187374 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-07-12 13:51:54.187380 | orchestrator | 2025-07-12 13:51:54.187386 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-07-12 13:51:54.187392 | orchestrator | Saturday 12 July 2025 13:50:08 +0000 (0:00:01.129) 0:04:39.887 ********* 2025-07-12 13:51:54.187402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-07-12 13:51:54.187408 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.187414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 
'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-07-12 13:51:54.187421 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.187427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-07-12 13:51:54.187433 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.187440 | orchestrator | 2025-07-12 13:51:54.187446 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-07-12 13:51:54.187452 | orchestrator | Saturday 12 July 2025 13:50:09 +0000 (0:00:01.049) 0:04:40.937 ********* 2025-07-12 13:51:54.187464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': 
'6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-07-12 13:51:54.187470 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.187477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-07-12 13:51:54.187483 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.187505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-07-12 13:51:54.187512 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.187518 | orchestrator | 2025-07-12 13:51:54.187524 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-07-12 13:51:54.187530 | orchestrator | Saturday 12 July 2025 13:50:10 +0000 (0:00:01.406) 0:04:42.343 ********* 2025-07-12 13:51:54.187536 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.187588 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.187595 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.187601 | orchestrator | 2025-07-12 13:51:54.187607 | orchestrator | TASK 
[proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-07-12 13:51:54.187613 | orchestrator | Saturday 12 July 2025 13:50:12 +0000 (0:00:01.892) 0:04:44.235 ********* 2025-07-12 13:51:54.187619 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:51:54.187625 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:51:54.187631 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:51:54.187637 | orchestrator | 2025-07-12 13:51:54.187643 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-07-12 13:51:54.187650 | orchestrator | Saturday 12 July 2025 13:50:15 +0000 (0:00:02.379) 0:04:46.615 ********* 2025-07-12 13:51:54.187656 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:51:54.187662 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:51:54.187668 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:51:54.187674 | orchestrator | 2025-07-12 13:51:54.187680 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-07-12 13:51:54.187687 | orchestrator | Saturday 12 July 2025 13:50:18 +0000 (0:00:03.122) 0:04:49.737 ********* 2025-07-12 13:51:54.187746 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:51:54.187761 | orchestrator | 2025-07-12 13:51:54.187767 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-07-12 13:51:54.187773 | orchestrator | Saturday 12 July 2025 13:50:19 +0000 (0:00:01.314) 0:04:51.052 ********* 2025-07-12 13:51:54.187780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 13:51:54.187793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 13:51:54.187799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 13:51:54.187827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 13:51:54.187834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.187844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 13:51:54.187855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 13:51:54.187862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 13:51:54.187868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 13:51:54.187889 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.187896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 13:51:54.187905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 13:51:54.187917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 13:51:54.187924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 13:51:54.187930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.187936 | orchestrator | 2025-07-12 13:51:54.187943 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-07-12 13:51:54.187949 | orchestrator | Saturday 12 July 2025 13:50:23 +0000 (0:00:03.663) 0:04:54.715 ********* 2025-07-12 13:51:54.187970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 13:51:54.187977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 13:51:54.187987 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 13:51:54.187998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 13:51:54.188004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.188010 | orchestrator | skipping: [testbed-node-0] 2025-07-12 
13:51:54.188017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 13:51:54.188037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 13:51:54.188044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 13:51:54.188054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 13:51:54.188065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.188071 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.188078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 13:51:54.188084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 13:51:54.188105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 13:51:54.188112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 13:51:54.188120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 13:51:54.188129 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.188135 | orchestrator | 2025-07-12 13:51:54.188140 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-07-12 13:51:54.188146 | orchestrator | Saturday 12 July 2025 13:50:23 +0000 (0:00:00.712) 0:04:55.427 ********* 2025-07-12 13:51:54.188151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-12 13:51:54.188157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-12 
13:51:54.188162 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.188168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-12 13:51:54.188173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-12 13:51:54.188179 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.188184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-12 13:51:54.188190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-12 13:51:54.188195 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.188201 | orchestrator | 2025-07-12 13:51:54.188206 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-07-12 13:51:54.188211 | orchestrator | Saturday 12 July 2025 13:50:24 +0000 (0:00:00.893) 0:04:56.320 ********* 2025-07-12 13:51:54.188217 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:51:54.188222 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:51:54.188227 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:51:54.188232 | orchestrator | 2025-07-12 13:51:54.188238 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-07-12 13:51:54.188243 | orchestrator | Saturday 12 July 2025 13:50:26 +0000 (0:00:01.762) 0:04:58.083 ********* 
2025-07-12 13:51:54.188248 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:51:54.188253 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:51:54.188259 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:51:54.188264 | orchestrator | 2025-07-12 13:51:54.188269 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-07-12 13:51:54.188275 | orchestrator | Saturday 12 July 2025 13:50:28 +0000 (0:00:02.075) 0:05:00.159 ********* 2025-07-12 13:51:54.188280 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:51:54.188285 | orchestrator | 2025-07-12 13:51:54.188290 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-07-12 13:51:54.188296 | orchestrator | Saturday 12 July 2025 13:50:30 +0000 (0:00:01.383) 0:05:01.542 ********* 2025-07-12 13:51:54.188318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 13:51:54.188328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 13:51:54.188334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 13:51:54.188340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 13:51:54.188359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 13:51:54.188375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 13:51:54.188381 | orchestrator | 2025-07-12 13:51:54.188387 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-07-12 13:51:54.188392 | orchestrator | Saturday 12 July 2025 13:50:35 +0000 (0:00:05.596) 0:05:07.138 ********* 2025-07-12 13:51:54.188398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 
'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 13:51:54.188404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 13:51:54.188413 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.188432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 13:51:54.188442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 13:51:54.188448 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.188453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 13:51:54.188459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 13:51:54.188469 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.188475 | orchestrator | 2025-07-12 13:51:54.188480 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-07-12 13:51:54.188485 | orchestrator | Saturday 12 July 2025 13:50:36 +0000 (0:00:00.967) 0:05:08.105 ********* 2025-07-12 13:51:54.188491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-07-12 
13:51:54.188509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-12 13:51:54.188515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-12 13:51:54.188521 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.188527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-07-12 13:51:54.188532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-12 13:51:54.188552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-12 13:51:54.188562 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.188570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-07-12 13:51:54.188583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-12 13:51:54.188593 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-12 13:51:54.188602 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.188609 | orchestrator | 2025-07-12 13:51:54.188614 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-07-12 13:51:54.188620 | orchestrator | Saturday 12 July 2025 13:50:37 +0000 (0:00:00.922) 0:05:09.027 ********* 2025-07-12 13:51:54.188625 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.188630 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.188635 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.188641 | orchestrator | 2025-07-12 13:51:54.188646 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-07-12 13:51:54.188651 | orchestrator | Saturday 12 July 2025 13:50:37 +0000 (0:00:00.444) 0:05:09.472 ********* 2025-07-12 13:51:54.188656 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.188662 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.188667 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.188672 | orchestrator | 2025-07-12 13:51:54.188677 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-07-12 13:51:54.188683 | orchestrator | Saturday 12 July 2025 13:50:39 +0000 (0:00:01.411) 0:05:10.884 ********* 2025-07-12 13:51:54.188688 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:51:54.188698 | orchestrator | 2025-07-12 13:51:54.188703 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-07-12 13:51:54.188709 | orchestrator | Saturday 12 July 2025 
13:50:41 +0000 (0:00:01.743) 0:05:12.627 ********* 2025-07-12 13:51:54.188714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-12 13:51:54.188721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 13:51:54.188743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-12 13:51:54.188753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:51:54.188758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:51:54.188764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 13:51:54.188774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:51:54.188779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 13:51:54.188799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-12 13:51:54.188805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:51:54.188811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 13:51:54.188819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 13:51:54.188825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:51:54.188834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:51:54.188840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 13:51:54.188849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-12 13:51:54.188855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-12 13:51:54.188864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:51:54.188869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 
'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-12 13:51:54.188879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:51:54.188885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-12 13:51:54.188895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-12 13:51:54.188900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:51:54.188909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-12 13:51:54.188921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:51:54.188927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-12 13:51:54.188933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-12 13:51:54.188941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:51:54.188948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:51:54.188958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-12 13:51:54.188963 | orchestrator | 2025-07-12 13:51:54.188969 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-07-12 13:51:54.188977 | orchestrator | Saturday 12 July 2025 13:50:45 +0000 (0:00:04.214) 0:05:16.841 ********* 2025-07-12 13:51:54.188986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-07-12 13:51:54.188992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  
2025-07-12 13:51:54.188998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:51:54.189004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:51:54.189012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 13:51:54.189021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 
'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-07-12 13:51:54.189031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-12 13:51:54.189037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 
'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:51:54.189043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:51:54.189048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-12 13:51:54.189054 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.189062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-07-12 13:51:54.189068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 13:51:54.189077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:51:54.189086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:51:54.189091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 13:51:54.189097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-07-12 13:51:54.189106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-07-12 13:51:54.189112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-12 13:51:54.189124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 13:51:54.189130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:51:54.189135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:51:54.189141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:51:54.189146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:51:54.189155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-12 13:51:54.189161 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.189166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 13:51:54.189178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-07-12 13:51:54.189184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-12 13:51:54.189190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:51:54.189195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:51:54.189204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-12 13:51:54.189209 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.189215 | orchestrator | 2025-07-12 13:51:54.189220 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-07-12 13:51:54.189225 | orchestrator | Saturday 12 July 2025 13:50:46 +0000 (0:00:01.265) 0:05:18.106 ********* 2025-07-12 13:51:54.189231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-07-12 13:51:54.189240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-07-12 13:51:54.189246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 
'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-12 13:51:54.189255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-12 13:51:54.189260 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.189266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-07-12 13:51:54.189271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-07-12 13:51:54.189277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-12 13:51:54.189283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-12 13:51:54.189288 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.189294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  
2025-07-12 13:51:54.189299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-07-12 13:51:54.189305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-12 13:51:54.189310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-12 13:51:54.189315 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.189321 | orchestrator | 2025-07-12 13:51:54.189326 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-07-12 13:51:54.189332 | orchestrator | Saturday 12 July 2025 13:50:47 +0000 (0:00:00.989) 0:05:19.096 ********* 2025-07-12 13:51:54.189337 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.189342 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.189348 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.189353 | orchestrator | 2025-07-12 13:51:54.189358 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-07-12 13:51:54.189367 | orchestrator | Saturday 12 July 2025 13:50:48 +0000 (0:00:00.424) 0:05:19.521 ********* 2025-07-12 13:51:54.189375 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.189381 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.189386 | orchestrator 
| skipping: [testbed-node-2] 2025-07-12 13:51:54.189391 | orchestrator | 2025-07-12 13:51:54.189397 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-07-12 13:51:54.189402 | orchestrator | Saturday 12 July 2025 13:50:49 +0000 (0:00:01.412) 0:05:20.933 ********* 2025-07-12 13:51:54.189407 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:51:54.189413 | orchestrator | 2025-07-12 13:51:54.189418 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-07-12 13:51:54.189423 | orchestrator | Saturday 12 July 2025 13:50:51 +0000 (0:00:01.758) 0:05:22.691 ********* 2025-07-12 13:51:54.189431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-12 13:51:54.189438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 
'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-12 13:51:54.189444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-12 13:51:54.189450 | orchestrator | 2025-07-12 13:51:54.189460 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-07-12 
13:51:54.189465 | orchestrator | Saturday 12 July 2025 13:50:53 +0000 (0:00:02.449) 0:05:25.141 ********* 2025-07-12 13:51:54.189473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-07-12 13:51:54.189479 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.189489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-07-12 13:51:54.189495 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.189501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-07-12 13:51:54.189507 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.189512 | orchestrator | 2025-07-12 13:51:54.189517 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-07-12 13:51:54.189522 | orchestrator | Saturday 12 July 2025 13:50:54 +0000 (0:00:00.400) 0:05:25.542 ********* 2025-07-12 13:51:54.189528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-07-12 13:51:54.189533 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.189553 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-07-12 13:51:54.189563 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.189569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-07-12 13:51:54.189574 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.189580 | orchestrator | 2025-07-12 13:51:54.189585 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-07-12 13:51:54.189590 | orchestrator | Saturday 12 July 2025 13:50:55 +0000 (0:00:01.017) 0:05:26.560 ********* 2025-07-12 13:51:54.189596 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.189601 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.189606 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.189611 | orchestrator | 2025-07-12 13:51:54.189617 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-07-12 13:51:54.189622 | orchestrator | Saturday 12 July 2025 13:50:55 +0000 (0:00:00.436) 0:05:26.997 ********* 2025-07-12 13:51:54.189627 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.189633 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.189638 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.189643 | orchestrator | 2025-07-12 13:51:54.189649 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-07-12 13:51:54.189657 | orchestrator | Saturday 12 July 2025 13:50:56 +0000 (0:00:01.343) 0:05:28.341 ********* 2025-07-12 13:51:54.189663 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:51:54.189668 | orchestrator | 2025-07-12 13:51:54.189673 | orchestrator | TASK 
[haproxy-config : Copying over skyline haproxy config] ******************** 2025-07-12 13:51:54.189679 | orchestrator | Saturday 12 July 2025 13:50:58 +0000 (0:00:01.765) 0:05:30.106 ********* 2025-07-12 13:51:54.189684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-07-12 13:51:54.189693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-07-12 13:51:54.189699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-07-12 13:51:54.189709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-07-12 13:51:54.189718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-07-12 13:51:54.189727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-07-12 13:51:54.189732 | orchestrator | 2025-07-12 13:51:54.189738 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-07-12 13:51:54.189743 | orchestrator | Saturday 12 July 2025 13:51:05 +0000 (0:00:06.529) 0:05:36.636 ********* 2025-07-12 13:51:54.189749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-07-12 13:51:54.189758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-07-12 13:51:54.189764 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.189772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-07-12 13:51:54.189781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-07-12 13:51:54.189787 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.189793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-07-12 13:51:54.189802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 
'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-07-12 13:51:54.189808 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.189813 | orchestrator | 2025-07-12 13:51:54.189818 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-07-12 13:51:54.189824 | orchestrator | Saturday 12 July 2025 13:51:05 +0000 (0:00:00.637) 0:05:37.273 ********* 2025-07-12 13:51:54.189829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-07-12 13:51:54.189837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-07-12 13:51:54.189843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-07-12 13:51:54.189848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-07-12 13:51:54.189854 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.189859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-07-12 13:51:54.189864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-07-12 13:51:54.189873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-07-12 13:51:54.189878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-07-12 13:51:54.189884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-07-12 13:51:54.189893 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.189898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-07-12 13:51:54.189904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-07-12 13:51:54.189909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-07-12 13:51:54.189915 | 
orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.189920 | orchestrator | 2025-07-12 13:51:54.189926 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-07-12 13:51:54.189931 | orchestrator | Saturday 12 July 2025 13:51:07 +0000 (0:00:01.629) 0:05:38.903 ********* 2025-07-12 13:51:54.189936 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:51:54.189942 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:51:54.189947 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:51:54.189952 | orchestrator | 2025-07-12 13:51:54.189957 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-07-12 13:51:54.189963 | orchestrator | Saturday 12 July 2025 13:51:08 +0000 (0:00:01.369) 0:05:40.272 ********* 2025-07-12 13:51:54.189968 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:51:54.189973 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:51:54.189979 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:51:54.189984 | orchestrator | 2025-07-12 13:51:54.189989 | orchestrator | TASK [include_role : swift] **************************************************** 2025-07-12 13:51:54.189994 | orchestrator | Saturday 12 July 2025 13:51:10 +0000 (0:00:02.103) 0:05:42.376 ********* 2025-07-12 13:51:54.190000 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.190005 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.190010 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.190036 | orchestrator | 2025-07-12 13:51:54.190042 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-07-12 13:51:54.190049 | orchestrator | Saturday 12 July 2025 13:51:11 +0000 (0:00:00.321) 0:05:42.698 ********* 2025-07-12 13:51:54.190055 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.190060 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.190065 | 
orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.190071 | orchestrator | 2025-07-12 13:51:54.190076 | orchestrator | TASK [include_role : trove] **************************************************** 2025-07-12 13:51:54.190081 | orchestrator | Saturday 12 July 2025 13:51:11 +0000 (0:00:00.670) 0:05:43.368 ********* 2025-07-12 13:51:54.190087 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.190092 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.190097 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.190103 | orchestrator | 2025-07-12 13:51:54.190108 | orchestrator | TASK [include_role : venus] **************************************************** 2025-07-12 13:51:54.190113 | orchestrator | Saturday 12 July 2025 13:51:12 +0000 (0:00:00.330) 0:05:43.698 ********* 2025-07-12 13:51:54.190121 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.190127 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.190132 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.190137 | orchestrator | 2025-07-12 13:51:54.190143 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-07-12 13:51:54.190148 | orchestrator | Saturday 12 July 2025 13:51:12 +0000 (0:00:00.297) 0:05:43.996 ********* 2025-07-12 13:51:54.190157 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.190163 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.190168 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.190173 | orchestrator | 2025-07-12 13:51:54.190179 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-07-12 13:51:54.190184 | orchestrator | Saturday 12 July 2025 13:51:12 +0000 (0:00:00.343) 0:05:44.340 ********* 2025-07-12 13:51:54.190190 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.190195 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.190200 | 
orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.190205 | orchestrator | 2025-07-12 13:51:54.190211 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-07-12 13:51:54.190216 | orchestrator | Saturday 12 July 2025 13:51:13 +0000 (0:00:00.857) 0:05:45.198 ********* 2025-07-12 13:51:54.190222 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:51:54.190227 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:51:54.190232 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:51:54.190238 | orchestrator | 2025-07-12 13:51:54.190243 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-07-12 13:51:54.190248 | orchestrator | Saturday 12 July 2025 13:51:14 +0000 (0:00:00.651) 0:05:45.849 ********* 2025-07-12 13:51:54.190253 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:51:54.190259 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:51:54.190264 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:51:54.190269 | orchestrator | 2025-07-12 13:51:54.190275 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-07-12 13:51:54.190283 | orchestrator | Saturday 12 July 2025 13:51:14 +0000 (0:00:00.349) 0:05:46.199 ********* 2025-07-12 13:51:54.190288 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:51:54.190293 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:51:54.190299 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:51:54.190304 | orchestrator | 2025-07-12 13:51:54.190309 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-07-12 13:51:54.190315 | orchestrator | Saturday 12 July 2025 13:51:15 +0000 (0:00:01.212) 0:05:47.412 ********* 2025-07-12 13:51:54.190320 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:51:54.190325 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:51:54.190330 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:51:54.190336 | 
orchestrator | 2025-07-12 13:51:54.190341 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-07-12 13:51:54.190346 | orchestrator | Saturday 12 July 2025 13:51:16 +0000 (0:00:00.885) 0:05:48.297 ********* 2025-07-12 13:51:54.190352 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:51:54.190357 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:51:54.190362 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:51:54.190368 | orchestrator | 2025-07-12 13:51:54.190373 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-07-12 13:51:54.190378 | orchestrator | Saturday 12 July 2025 13:51:17 +0000 (0:00:00.891) 0:05:49.188 ********* 2025-07-12 13:51:54.190384 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:51:54.190389 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:51:54.190394 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:51:54.190400 | orchestrator | 2025-07-12 13:51:54.190405 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-07-12 13:51:54.190410 | orchestrator | Saturday 12 July 2025 13:51:27 +0000 (0:00:09.463) 0:05:58.651 ********* 2025-07-12 13:51:54.190416 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:51:54.190421 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:51:54.190426 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:51:54.190431 | orchestrator | 2025-07-12 13:51:54.190437 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-07-12 13:51:54.190442 | orchestrator | Saturday 12 July 2025 13:51:27 +0000 (0:00:00.721) 0:05:59.373 ********* 2025-07-12 13:51:54.190447 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:51:54.190453 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:51:54.190462 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:51:54.190467 | orchestrator | 2025-07-12 13:51:54.190473 | 
orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-07-12 13:51:54.190478 | orchestrator | Saturday 12 July 2025 13:51:36 +0000 (0:00:08.401) 0:06:07.774 ********* 2025-07-12 13:51:54.190483 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:51:54.190488 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:51:54.190494 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:51:54.190499 | orchestrator | 2025-07-12 13:51:54.190504 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-07-12 13:51:54.190510 | orchestrator | Saturday 12 July 2025 13:51:41 +0000 (0:00:04.756) 0:06:12.531 ********* 2025-07-12 13:51:54.190515 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:51:54.190520 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:51:54.190526 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:51:54.190531 | orchestrator | 2025-07-12 13:51:54.190536 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-07-12 13:51:54.190556 | orchestrator | Saturday 12 July 2025 13:51:45 +0000 (0:00:04.334) 0:06:16.865 ********* 2025-07-12 13:51:54.190561 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.190567 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.190572 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.190579 | orchestrator | 2025-07-12 13:51:54.190589 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-07-12 13:51:54.190595 | orchestrator | Saturday 12 July 2025 13:51:45 +0000 (0:00:00.357) 0:06:17.223 ********* 2025-07-12 13:51:54.190600 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.190606 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.190611 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.190616 | orchestrator | 2025-07-12 13:51:54.190622 | orchestrator | RUNNING 
HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-07-12 13:51:54.190627 | orchestrator | Saturday 12 July 2025 13:51:46 +0000 (0:00:00.714) 0:06:17.938 ********* 2025-07-12 13:51:54.190632 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.190637 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.190646 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.190651 | orchestrator | 2025-07-12 13:51:54.190657 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-07-12 13:51:54.190662 | orchestrator | Saturday 12 July 2025 13:51:46 +0000 (0:00:00.377) 0:06:18.315 ********* 2025-07-12 13:51:54.190667 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.190673 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.190678 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.190683 | orchestrator | 2025-07-12 13:51:54.190689 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-07-12 13:51:54.190694 | orchestrator | Saturday 12 July 2025 13:51:47 +0000 (0:00:00.352) 0:06:18.667 ********* 2025-07-12 13:51:54.190699 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.190705 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.190710 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.190715 | orchestrator | 2025-07-12 13:51:54.190720 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-07-12 13:51:54.190726 | orchestrator | Saturday 12 July 2025 13:51:47 +0000 (0:00:00.332) 0:06:19.000 ********* 2025-07-12 13:51:54.190731 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:51:54.190736 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:51:54.190741 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:51:54.190747 | orchestrator | 2025-07-12 13:51:54.190752 | orchestrator | RUNNING 
HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-07-12 13:51:54.190757 | orchestrator | Saturday 12 July 2025 13:51:48 +0000 (0:00:00.647) 0:06:19.648 ********* 2025-07-12 13:51:54.190762 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:51:54.190768 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:51:54.190773 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:51:54.190783 | orchestrator | 2025-07-12 13:51:54.190788 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-07-12 13:51:54.190793 | orchestrator | Saturday 12 July 2025 13:51:51 +0000 (0:00:03.560) 0:06:23.208 ********* 2025-07-12 13:51:54.190802 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:51:54.190807 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:51:54.190812 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:51:54.190817 | orchestrator | 2025-07-12 13:51:54.190823 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:51:54.190828 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-07-12 13:51:54.190834 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-07-12 13:51:54.190839 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-07-12 13:51:54.190845 | orchestrator | 2025-07-12 13:51:54.190850 | orchestrator | 2025-07-12 13:51:54.190855 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:51:54.190861 | orchestrator | Saturday 12 July 2025 13:51:52 +0000 (0:00:00.818) 0:06:24.026 ********* 2025-07-12 13:51:54.190866 | orchestrator | =============================================================================== 2025-07-12 13:51:54.190871 | orchestrator | loadbalancer : Start backup haproxy container 
--------------------------- 9.46s 2025-07-12 13:51:54.190877 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.40s 2025-07-12 13:51:54.190882 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.53s 2025-07-12 13:51:54.190887 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 6.23s 2025-07-12 13:51:54.190892 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.60s 2025-07-12 13:51:54.190898 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.52s 2025-07-12 13:51:54.190903 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.76s 2025-07-12 13:51:54.190908 | orchestrator | loadbalancer : Copying over custom haproxy services configuration ------- 4.56s 2025-07-12 13:51:54.190913 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.49s 2025-07-12 13:51:54.190919 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 4.44s 2025-07-12 13:51:54.190924 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.40s 2025-07-12 13:51:54.190929 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.36s 2025-07-12 13:51:54.190934 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.34s 2025-07-12 13:51:54.190940 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.33s 2025-07-12 13:51:54.190945 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.22s 2025-07-12 13:51:54.190950 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.22s 2025-07-12 13:51:54.190955 | orchestrator | haproxy-config : Copying over prometheus haproxy config 
----------------- 4.21s 2025-07-12 13:51:54.190960 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.07s 2025-07-12 13:51:54.190966 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.01s 2025-07-12 13:51:54.190971 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 3.98s 2025-07-12 13:51:57.207612 | orchestrator | 2025-07-12 13:51:57 | INFO  | Task 7ab15e41-5dbb-49bc-a91d-3da67b000176 is in state STARTED 2025-07-12 13:51:57.208591 | orchestrator | 2025-07-12 13:51:57 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:51:57.210385 | orchestrator | 2025-07-12 13:51:57 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:51:57.210492 | orchestrator | 2025-07-12 13:51:57 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:52:00.258244 | orchestrator | 2025-07-12 13:52:00 | INFO  | Task 7ab15e41-5dbb-49bc-a91d-3da67b000176 is in state STARTED 2025-07-12 13:52:00.258351 | orchestrator | 2025-07-12 13:52:00 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:52:00.261779 | orchestrator | 2025-07-12 13:52:00 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:52:00.261808 | orchestrator | 2025-07-12 13:52:00 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:52:03.303579 | orchestrator | 2025-07-12 13:52:03 | INFO  | Task 7ab15e41-5dbb-49bc-a91d-3da67b000176 is in state STARTED 2025-07-12 13:52:03.304085 | orchestrator | 2025-07-12 13:52:03 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:52:03.305001 | orchestrator | 2025-07-12 13:52:03 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:52:03.305029 | orchestrator | 2025-07-12 13:52:03 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:52:06.380393 | orchestrator | 
2025-07-12 13:52:06 | INFO  | Task 7ab15e41-5dbb-49bc-a91d-3da67b000176 is in state STARTED
Wait 1 second(s) until the next check 2025-07-12 13:53:28.732879 | orchestrator | 2025-07-12 13:53:28 | INFO  | Task 7ab15e41-5dbb-49bc-a91d-3da67b000176 is in state STARTED 2025-07-12 13:53:28.735211 | orchestrator | 2025-07-12 13:53:28 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:53:28.737571 | orchestrator | 2025-07-12 13:53:28 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:53:28.737608 | orchestrator | 2025-07-12 13:53:28 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:53:31.786648 | orchestrator | 2025-07-12 13:53:31 | INFO  | Task 7ab15e41-5dbb-49bc-a91d-3da67b000176 is in state STARTED 2025-07-12 13:53:31.788333 | orchestrator | 2025-07-12 13:53:31 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:53:31.790120 | orchestrator | 2025-07-12 13:53:31 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:53:31.790244 | orchestrator | 2025-07-12 13:53:31 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:53:34.846432 | orchestrator | 2025-07-12 13:53:34 | INFO  | Task 7ab15e41-5dbb-49bc-a91d-3da67b000176 is in state STARTED 2025-07-12 13:53:34.847864 | orchestrator | 2025-07-12 13:53:34 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:53:34.849691 | orchestrator | 2025-07-12 13:53:34 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:53:34.849730 | orchestrator | 2025-07-12 13:53:34 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:53:37.898454 | orchestrator | 2025-07-12 13:53:37 | INFO  | Task 7ab15e41-5dbb-49bc-a91d-3da67b000176 is in state STARTED 2025-07-12 13:53:37.899975 | orchestrator | 2025-07-12 13:53:37 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:53:37.901893 | orchestrator | 2025-07-12 13:53:37 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state 
STARTED 2025-07-12 13:53:37.901916 | orchestrator | 2025-07-12 13:53:37 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:53:40.946139 | orchestrator | 2025-07-12 13:53:40 | INFO  | Task 7ab15e41-5dbb-49bc-a91d-3da67b000176 is in state STARTED 2025-07-12 13:53:40.946698 | orchestrator | 2025-07-12 13:53:40 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:53:40.948069 | orchestrator | 2025-07-12 13:53:40 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:53:40.948099 | orchestrator | 2025-07-12 13:53:40 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:53:43.989818 | orchestrator | 2025-07-12 13:53:43 | INFO  | Task 7ab15e41-5dbb-49bc-a91d-3da67b000176 is in state STARTED 2025-07-12 13:53:43.993201 | orchestrator | 2025-07-12 13:53:43 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:53:43.994943 | orchestrator | 2025-07-12 13:53:43 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:53:43.997439 | orchestrator | 2025-07-12 13:53:43 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:53:47.053111 | orchestrator | 2025-07-12 13:53:47 | INFO  | Task 7ab15e41-5dbb-49bc-a91d-3da67b000176 is in state STARTED 2025-07-12 13:53:47.054229 | orchestrator | 2025-07-12 13:53:47 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:53:47.055686 | orchestrator | 2025-07-12 13:53:47 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:53:47.055710 | orchestrator | 2025-07-12 13:53:47 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:53:50.099932 | orchestrator | 2025-07-12 13:53:50 | INFO  | Task 7ab15e41-5dbb-49bc-a91d-3da67b000176 is in state STARTED 2025-07-12 13:53:50.102136 | orchestrator | 2025-07-12 13:53:50 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:53:50.103357 | orchestrator | 
2025-07-12 13:53:50 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:53:50.103686 | orchestrator | 2025-07-12 13:53:50 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:53:53.148858 | orchestrator | 2025-07-12 13:53:53 | INFO  | Task 7ab15e41-5dbb-49bc-a91d-3da67b000176 is in state STARTED 2025-07-12 13:53:53.150175 | orchestrator | 2025-07-12 13:53:53 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:53:53.152420 | orchestrator | 2025-07-12 13:53:53 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:53:53.152864 | orchestrator | 2025-07-12 13:53:53 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:53:56.194337 | orchestrator | 2025-07-12 13:53:56 | INFO  | Task 7ab15e41-5dbb-49bc-a91d-3da67b000176 is in state STARTED 2025-07-12 13:53:56.195133 | orchestrator | 2025-07-12 13:53:56 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:53:56.195978 | orchestrator | 2025-07-12 13:53:56 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:53:56.196018 | orchestrator | 2025-07-12 13:53:56 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:53:59.247881 | orchestrator | 2025-07-12 13:53:59 | INFO  | Task 7ab15e41-5dbb-49bc-a91d-3da67b000176 is in state STARTED 2025-07-12 13:53:59.249689 | orchestrator | 2025-07-12 13:53:59 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:53:59.250920 | orchestrator | 2025-07-12 13:53:59 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:53:59.250957 | orchestrator | 2025-07-12 13:53:59 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:54:02.308408 | orchestrator | 2025-07-12 13:54:02 | INFO  | Task 7ab15e41-5dbb-49bc-a91d-3da67b000176 is in state STARTED 2025-07-12 13:54:02.312055 | orchestrator | 2025-07-12 13:54:02 | INFO  | Task 
6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:54:02.313948 | orchestrator | 2025-07-12 13:54:02 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:54:02.313975 | orchestrator | 2025-07-12 13:54:02 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:54:05.364773 | orchestrator | 2025-07-12 13:54:05 | INFO  | Task 7ab15e41-5dbb-49bc-a91d-3da67b000176 is in state STARTED 2025-07-12 13:54:05.367039 | orchestrator | 2025-07-12 13:54:05 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:54:05.369512 | orchestrator | 2025-07-12 13:54:05 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:54:05.369537 | orchestrator | 2025-07-12 13:54:05 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:54:08.426562 | orchestrator | 2025-07-12 13:54:08 | INFO  | Task 7ab15e41-5dbb-49bc-a91d-3da67b000176 is in state STARTED 2025-07-12 13:54:08.429137 | orchestrator | 2025-07-12 13:54:08 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:54:08.435188 | orchestrator | 2025-07-12 13:54:08 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:54:08.435274 | orchestrator | 2025-07-12 13:54:08 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:54:11.477142 | orchestrator | 2025-07-12 13:54:11 | INFO  | Task 7ab15e41-5dbb-49bc-a91d-3da67b000176 is in state STARTED 2025-07-12 13:54:11.478345 | orchestrator | 2025-07-12 13:54:11 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:54:11.480243 | orchestrator | 2025-07-12 13:54:11 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:54:11.480279 | orchestrator | 2025-07-12 13:54:11 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:54:14.522255 | orchestrator | 2025-07-12 13:54:14 | INFO  | Task 7ab15e41-5dbb-49bc-a91d-3da67b000176 is in state 
STARTED 2025-07-12 13:54:14.523689 | orchestrator | 2025-07-12 13:54:14 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:54:14.525440 | orchestrator | 2025-07-12 13:54:14 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:54:14.525760 | orchestrator | 2025-07-12 13:54:14 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:54:17.570858 | orchestrator | 2025-07-12 13:54:17 | INFO  | Task 7ab15e41-5dbb-49bc-a91d-3da67b000176 is in state STARTED 2025-07-12 13:54:17.572690 | orchestrator | 2025-07-12 13:54:17 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:54:17.574545 | orchestrator | 2025-07-12 13:54:17 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:54:17.574578 | orchestrator | 2025-07-12 13:54:17 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:54:20.621218 | orchestrator | 2025-07-12 13:54:20 | INFO  | Task 7ab15e41-5dbb-49bc-a91d-3da67b000176 is in state STARTED 2025-07-12 13:54:20.621939 | orchestrator | 2025-07-12 13:54:20 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:54:20.623710 | orchestrator | 2025-07-12 13:54:20 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:54:20.623745 | orchestrator | 2025-07-12 13:54:20 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:54:23.682312 | orchestrator | 2025-07-12 13:54:23 | INFO  | Task 7ab15e41-5dbb-49bc-a91d-3da67b000176 is in state STARTED 2025-07-12 13:54:23.683037 | orchestrator | 2025-07-12 13:54:23 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state STARTED 2025-07-12 13:54:23.684494 | orchestrator | 2025-07-12 13:54:23 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:54:23.684533 | orchestrator | 2025-07-12 13:54:23 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:54:26.739995 | orchestrator | 
2025-07-12 13:54:26 | INFO  | Task 7ab15e41-5dbb-49bc-a91d-3da67b000176 is in state STARTED
2025-07-12 13:54:26.744664 | orchestrator | 2025-07-12 13:54:26 | INFO  | Task 6b27fb14-d1d8-4088-b016-ddd50bbe2964 is in state SUCCESS
2025-07-12 13:54:26.747230 | orchestrator |
2025-07-12 13:54:26.747276 | orchestrator |
2025-07-12 13:54:26.747290 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-07-12 13:54:26.747303 | orchestrator |
2025-07-12 13:54:26.747315 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-07-12 13:54:26.747326 | orchestrator | Saturday 12 July 2025 13:42:21 +0000 (0:00:00.712) 0:00:00.712 *********
2025-07-12 13:54:26.747337 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:54:26.747349 | orchestrator |
2025-07-12 13:54:26.747361 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-07-12 13:54:26.747491 | orchestrator | Saturday 12 July 2025 13:42:22 +0000 (0:00:00.996) 0:00:01.708 *********
2025-07-12 13:54:26.747507 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:26.747519 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:26.747529 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:26.747540 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.747699 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:26.747712 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:26.747722 | orchestrator |
2025-07-12 13:54:26.747733 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-07-12 13:54:26.747775 | orchestrator | Saturday 12 July 2025 13:42:23 +0000 (0:00:01.494) 0:00:03.203 *********
2025-07-12 13:54:26.747786 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.747797 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:26.747808 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:26.747820 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:26.747832 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:26.747843 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:26.747855 | orchestrator |
2025-07-12 13:54:26.747867 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-07-12 13:54:26.747881 | orchestrator | Saturday 12 July 2025 13:42:24 +0000 (0:00:00.794) 0:00:03.998 *********
2025-07-12 13:54:26.747894 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.747906 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:26.747917 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:26.747929 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:26.747941 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:26.747952 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:26.747964 | orchestrator |
2025-07-12 13:54:26.747976 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-07-12 13:54:26.747988 | orchestrator | Saturday 12 July 2025 13:42:25 +0000 (0:00:01.040) 0:00:05.038 *********
2025-07-12 13:54:26.748000 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.748012 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:26.748024 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:26.748036 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:26.748046 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:26.748057 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:26.748068 | orchestrator |
2025-07-12 13:54:26.748079 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-07-12 13:54:26.748090 | orchestrator | Saturday 12 July 2025 13:42:26 +0000 (0:00:00.802) 0:00:05.841 *********
2025-07-12 13:54:26.748100 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.748139 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:26.748151 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:26.748162 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:26.748172 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:26.748183 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:26.748243 | orchestrator |
2025-07-12 13:54:26.748256 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-07-12 13:54:26.748267 | orchestrator | Saturday 12 July 2025 13:42:27 +0000 (0:00:00.697) 0:00:06.538 *********
2025-07-12 13:54:26.748278 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.748322 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:26.748334 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:26.748345 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:26.748355 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:26.748366 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:26.748415 | orchestrator |
2025-07-12 13:54:26.748427 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-07-12 13:54:26.748544 | orchestrator | Saturday 12 July 2025 13:42:27 +0000 (0:00:00.746) 0:00:07.285 *********
2025-07-12 13:54:26.748559 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.748570 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.748581 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.748591 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.748612 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.748623 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.748633 | orchestrator |
2025-07-12 13:54:26.748644 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-07-12 13:54:26.748655 | orchestrator | Saturday 12 July 2025 13:42:28 +0000 (0:00:00.679) 0:00:07.965 *********
2025-07-12 13:54:26.748665 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.748676 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:26.748686 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:26.748696 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:26.748707 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:26.748717 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:26.748729 | orchestrator |
2025-07-12 13:54:26.748739 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-07-12 13:54:26.748750 | orchestrator | Saturday 12 July 2025 13:42:29 +0000 (0:00:01.117) 0:00:09.082 *********
2025-07-12 13:54:26.748761 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 13:54:26.748785 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-12 13:54:26.748797 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-12 13:54:26.748807 | orchestrator |
2025-07-12 13:54:26.748847 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-07-12 13:54:26.748859 | orchestrator | Saturday 12 July 2025 13:42:30 +0000 (0:00:00.877) 0:00:09.960 *********
2025-07-12 13:54:26.748870 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.748881 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:26.748891 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:26.748901 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:26.748912 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:26.748922 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:26.748933 | orchestrator |
2025-07-12 13:54:26.748958 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-07-12 13:54:26.748970 | orchestrator | Saturday 12 July 2025 13:42:31 +0000 (0:00:01.344) 0:00:11.304 *********
2025-07-12 13:54:26.748980 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 13:54:26.748996 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-12 13:54:26.749013 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-12 13:54:26.749038 | orchestrator |
2025-07-12 13:54:26.749176 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-07-12 13:54:26.749246 | orchestrator | Saturday 12 July 2025 13:42:34 +0000 (0:00:03.090) 0:00:14.394 *********
2025-07-12 13:54:26.749341 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 13:54:26.749360 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-07-12 13:54:26.749373 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-07-12 13:54:26.749384 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.749423 | orchestrator |
2025-07-12 13:54:26.749436 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-07-12 13:54:26.749467 | orchestrator | Saturday 12 July 2025 13:42:35 +0000 (0:00:00.666) 0:00:15.061 *********
2025-07-12 13:54:26.749480 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-07-12 13:54:26.749495 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-07-12 13:54:26.749506 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-07-12 13:54:26.749562 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.749574 | orchestrator |
2025-07-12 13:54:26.749585 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-07-12 13:54:26.749596 | orchestrator | Saturday 12 July 2025 13:42:36 +0000 (0:00:01.025) 0:00:16.087 *********
2025-07-12 13:54:26.749609 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:26.749624 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:26.749635 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:26.749764 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.749801 | orchestrator |
2025-07-12 13:54:26.749880 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-07-12 13:54:26.749924 | orchestrator | Saturday 12 July 2025 13:42:37 +0000 (0:00:00.646) 0:00:16.733 *********
2025-07-12 13:54:26.749947 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-07-12 13:42:32.485181', 'end': '2025-07-12 13:42:32.750743', 'delta': '0:00:00.265562', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-07-12 13:54:26.749975 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-07-12 13:42:33.670018', 'end': '2025-07-12 13:42:33.902001', 'delta': '0:00:00.231983', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-07-12 13:54:26.749987 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-07-12 13:42:34.448155', 'end': '2025-07-12 13:42:34.734513', 'delta': '0:00:00.286358', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-07-12 13:54:26.750006 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.750090 | orchestrator |
2025-07-12 13:54:26.750107 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-07-12 13:54:26.750118 | orchestrator | Saturday 12 July 2025 13:42:37 +0000 (0:00:00.269) 0:00:17.003 *********
2025-07-12 13:54:26.750128 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.750139 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:26.750150 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:26.750160 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:26.750171 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:26.750218 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:26.750229 | orchestrator |
2025-07-12 13:54:26.750269 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-07-12 13:54:26.750288 | orchestrator | Saturday 12 July 2025 13:42:39 +0000 (0:00:01.990) 0:00:18.993 *********
2025-07-12 13:54:26.750305 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.750352 | orchestrator |
2025-07-12 13:54:26.750370 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-07-12 13:54:26.750388 | orchestrator | Saturday 12 July 2025 13:42:40 +0000 (0:00:01.066) 0:00:20.059 *********
2025-07-12 13:54:26.750572 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.750587 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.750598 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.750608 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.750619 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.750629 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.750640 | orchestrator |
2025-07-12 13:54:26.750651 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-07-12 13:54:26.750661 | orchestrator | Saturday 12 July 2025 13:42:41 +0000 (0:00:01.448) 0:00:21.507 *********
2025-07-12 13:54:26.750672 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.750682 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.750692 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.750703 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.750713 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.750724 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.750734 | orchestrator |
2025-07-12 13:54:26.750745 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-07-12 13:54:26.750755 | orchestrator | Saturday 12 July 2025 13:42:43 +0000 (0:00:01.553) 0:00:23.061 *********
2025-07-12 13:54:26.750766 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.750777 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.750787 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.750797 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.750808 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.750818 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.750829 | orchestrator |
2025-07-12 13:54:26.750839 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-07-12 13:54:26.750850 | orchestrator | Saturday 12 July 2025 13:42:44 +0000 (0:00:00.977) 0:00:24.039 *********
2025-07-12 13:54:26.750861 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.750871 | orchestrator |
2025-07-12 13:54:26.750881 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-07-12 13:54:26.750892 | orchestrator | Saturday 12 July 2025 13:42:44 +0000 (0:00:00.239) 0:00:24.278 *********
2025-07-12 13:54:26.750910 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.750920 | orchestrator |
2025-07-12 13:54:26.750931 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-07-12 13:54:26.750941 | orchestrator | Saturday 12 July 2025 13:42:44 +0000 (0:00:00.233) 0:00:24.512 *********
2025-07-12 13:54:26.750962 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.751000 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.751012 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.751022 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.751062 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.751074 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.751084 | orchestrator |
2025-07-12 13:54:26.751095 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-07-12 13:54:26.751156 | orchestrator | Saturday 12 July 2025 13:42:45 +0000 (0:00:00.638) 0:00:25.150 *********
2025-07-12 13:54:26.751170 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.751180 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.751191 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.751202 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.751212 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.751222 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.751233 | orchestrator |
2025-07-12 13:54:26.751244 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-07-12 13:54:26.751255 | orchestrator | Saturday 12 July 2025 13:42:46 +0000 (0:00:00.857) 0:00:26.008 *********
2025-07-12 13:54:26.751265 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.751276 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.751286 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.751297 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.751307 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.751318 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.751328 | orchestrator |
2025-07-12 13:54:26.751338 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-07-12 13:54:26.751349 | orchestrator | Saturday 12 July 2025 13:42:47 +0000 (0:00:00.989) 0:00:26.882 *********
2025-07-12 13:54:26.751360 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.751370 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.751380 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.751426 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.751438 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.751465 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.751517 | orchestrator |
2025-07-12 13:54:26.751528 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-07-12 13:54:26.751539 | orchestrator | Saturday 12 July 2025 13:42:48 +0000 (0:00:00.989) 0:00:27.871 *********
2025-07-12 13:54:26.751550 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.751560 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.751571 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.751581 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.751592 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.751602 | orchestrator
| skipping: [testbed-node-5] 2025-07-12 13:54:26.751612 | orchestrator | 2025-07-12 13:54:26.751623 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-07-12 13:54:26.751633 | orchestrator | Saturday 12 July 2025 13:42:49 +0000 (0:00:01.033) 0:00:28.905 ********* 2025-07-12 13:54:26.751644 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.751662 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.751680 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.751698 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.751716 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.751735 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.751747 | orchestrator | 2025-07-12 13:54:26.751758 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-07-12 13:54:26.751769 | orchestrator | Saturday 12 July 2025 13:42:50 +0000 (0:00:01.022) 0:00:29.927 ********* 2025-07-12 13:54:26.751779 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.751790 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.751800 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.751820 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.751830 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.751841 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.751851 | orchestrator | 2025-07-12 13:54:26.751862 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-07-12 13:54:26.751872 | orchestrator | Saturday 12 July 2025 13:42:51 +0000 (0:00:00.773) 0:00:30.701 ********* 2025-07-12 13:54:26.751884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.751896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.751913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.751925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.751978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.751991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aadb0376-3a3f-4e17-8a7c-6b5eb89f12e4', 'scsi-SQEMU_QEMU_HARDDISK_aadb0376-3a3f-4e17-8a7c-6b5eb89f12e4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aadb0376-3a3f-4e17-8a7c-6b5eb89f12e4-part1', 'scsi-SQEMU_QEMU_HARDDISK_aadb0376-3a3f-4e17-8a7c-6b5eb89f12e4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aadb0376-3a3f-4e17-8a7c-6b5eb89f12e4-part14', 'scsi-SQEMU_QEMU_HARDDISK_aadb0376-3a3f-4e17-8a7c-6b5eb89f12e4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aadb0376-3a3f-4e17-8a7c-6b5eb89f12e4-part15', 'scsi-SQEMU_QEMU_HARDDISK_aadb0376-3a3f-4e17-8a7c-6b5eb89f12e4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aadb0376-3a3f-4e17-8a7c-6b5eb89f12e4-part16', 'scsi-SQEMU_QEMU_HARDDISK_aadb0376-3a3f-4e17-8a7c-6b5eb89f12e4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:54:26.752115 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-13-00-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:54:26.752129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752170 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e7fde3ea-5d9a-4384-9c3f-e31c3a5c0c1c', 'scsi-SQEMU_QEMU_HARDDISK_e7fde3ea-5d9a-4384-9c3f-e31c3a5c0c1c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e7fde3ea-5d9a-4384-9c3f-e31c3a5c0c1c-part1', 'scsi-SQEMU_QEMU_HARDDISK_e7fde3ea-5d9a-4384-9c3f-e31c3a5c0c1c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e7fde3ea-5d9a-4384-9c3f-e31c3a5c0c1c-part14', 'scsi-SQEMU_QEMU_HARDDISK_e7fde3ea-5d9a-4384-9c3f-e31c3a5c0c1c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e7fde3ea-5d9a-4384-9c3f-e31c3a5c0c1c-part15', 'scsi-SQEMU_QEMU_HARDDISK_e7fde3ea-5d9a-4384-9c3f-e31c3a5c0c1c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e7fde3ea-5d9a-4384-9c3f-e31c3a5c0c1c-part16', 'scsi-SQEMU_QEMU_HARDDISK_e7fde3ea-5d9a-4384-9c3f-e31c3a5c0c1c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 
'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:54:26.752265 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.752277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-13-00-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:54:26.752288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752310 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2dbda7d9-a979-4bbd-9db7-ef0f0263b434', 'scsi-SQEMU_QEMU_HARDDISK_2dbda7d9-a979-4bbd-9db7-ef0f0263b434'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2dbda7d9-a979-4bbd-9db7-ef0f0263b434-part1', 'scsi-SQEMU_QEMU_HARDDISK_2dbda7d9-a979-4bbd-9db7-ef0f0263b434-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2dbda7d9-a979-4bbd-9db7-ef0f0263b434-part14', 'scsi-SQEMU_QEMU_HARDDISK_2dbda7d9-a979-4bbd-9db7-ef0f0263b434-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2dbda7d9-a979-4bbd-9db7-ef0f0263b434-part15', 
'scsi-SQEMU_QEMU_HARDDISK_2dbda7d9-a979-4bbd-9db7-ef0f0263b434-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2dbda7d9-a979-4bbd-9db7-ef0f0263b434-part16', 'scsi-SQEMU_QEMU_HARDDISK_2dbda7d9-a979-4bbd-9db7-ef0f0263b434-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:54:26.752419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-13-00-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:54:26.752431 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.752442 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f86cb3d6--0e78--5b6a--8369--843476bf59dc-osd--block--f86cb3d6--0e78--5b6a--8369--843476bf59dc', 'dm-uuid-LVM-cOowIwi4ngbGyp4J1ONZ0QCO9jALxi4Uq1QblHIlw69fQDfMDIPDIfIejgLGHClo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752480 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8c07aa4b--79b5--5c8f--bb7a--3f1e0dfe1f2a-osd--block--8c07aa4b--79b5--5c8f--bb7a--3f1e0dfe1f2a', 'dm-uuid-LVM-ryVpm1vZGfdej5YU7k2fce5rcubHgJ30K2EonOshBiJNmKQEiOlu6ex7QlfnrVEr'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752500 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752511 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752522 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752533 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752544 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.752555 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752570 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752590 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 
'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752601 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752619 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d5efef6-d196-4496-8e4e-101ce21afc70', 'scsi-SQEMU_QEMU_HARDDISK_5d5efef6-d196-4496-8e4e-101ce21afc70'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d5efef6-d196-4496-8e4e-101ce21afc70-part1', 'scsi-SQEMU_QEMU_HARDDISK_5d5efef6-d196-4496-8e4e-101ce21afc70-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d5efef6-d196-4496-8e4e-101ce21afc70-part14', 'scsi-SQEMU_QEMU_HARDDISK_5d5efef6-d196-4496-8e4e-101ce21afc70-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d5efef6-d196-4496-8e4e-101ce21afc70-part15', 'scsi-SQEMU_QEMU_HARDDISK_5d5efef6-d196-4496-8e4e-101ce21afc70-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 
'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d5efef6-d196-4496-8e4e-101ce21afc70-part16', 'scsi-SQEMU_QEMU_HARDDISK_5d5efef6-d196-4496-8e4e-101ce21afc70-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:54:26.752637 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f86cb3d6--0e78--5b6a--8369--843476bf59dc-osd--block--f86cb3d6--0e78--5b6a--8369--843476bf59dc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JiGeg9-HB3s-KxsR-vAVJ-z7Up-5NSC-twJAJp', 'scsi-0QEMU_QEMU_HARDDISK_cf6824d0-2336-4864-a32f-bffef7606523', 'scsi-SQEMU_QEMU_HARDDISK_cf6824d0-2336-4864-a32f-bffef7606523'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:54:26.752656 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8be3c046--75c4--5df6--b59b--0076bb3a4ccd-osd--block--8be3c046--75c4--5df6--b59b--0076bb3a4ccd', 'dm-uuid-LVM-eNKlWRslYY1LPS1Lsl2a1zcjZPSHC2eEwz1DWsQznfBgygDIVbtNzXLvhZOCsm1i'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752668 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8c07aa4b--79b5--5c8f--bb7a--3f1e0dfe1f2a-osd--block--8c07aa4b--79b5--5c8f--bb7a--3f1e0dfe1f2a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vVYPqR-aOTR-bKmD-EWSo-w8X6-HkbQ-6enrQh', 'scsi-0QEMU_QEMU_HARDDISK_bad1a367-9870-4c1b-af18-4999b26662c8', 'scsi-SQEMU_QEMU_HARDDISK_bad1a367-9870-4c1b-af18-4999b26662c8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:54:26.752686 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f8ec8ce8--a083--5a5f--ae06--780cf5acbe42-osd--block--f8ec8ce8--a083--5a5f--ae06--780cf5acbe42', 'dm-uuid-LVM-iP66jRhrYjcnf87yxq9NTie5JOPBTSKdfp6rB9Iyv3FK2fgoUAfj6YBcRbmD3h7y'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752697 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec46bf14-c827-46d0-9a8c-19525aeacad6', 'scsi-SQEMU_QEMU_HARDDISK_ec46bf14-c827-46d0-9a8c-19525aeacad6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:54:26.752709 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752721 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-13-00-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:54:26.752732 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.752743 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752759 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752778 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752800 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752810 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752822 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 
'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--76cf46ce--80cb--5d18--8384--c0838affc5b6-osd--block--76cf46ce--80cb--5d18--8384--c0838affc5b6', 'dm-uuid-LVM-9E3Qoc7BCPXfuH39FeSqjLVWWsxKeexa5Wzect2iOuEg1v0e8lAoF6zGMmhtApJL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752833 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752844 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--465622e3--903d--5505--a41f--76599f0f3897-osd--block--465622e3--903d--5505--a41f--76599f0f3897', 'dm-uuid-LVM-SjqvmYAJNXJDCerOGLeDv7HFSAwwonW6KnAMGsmGSt2GjW35ncgGDsYraOXL2Weh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752855 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752871 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752891 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7818c38c-8f07-44e8-a255-faa9c2adb8b1', 'scsi-SQEMU_QEMU_HARDDISK_7818c38c-8f07-44e8-a255-faa9c2adb8b1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7818c38c-8f07-44e8-a255-faa9c2adb8b1-part1', 'scsi-SQEMU_QEMU_HARDDISK_7818c38c-8f07-44e8-a255-faa9c2adb8b1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7818c38c-8f07-44e8-a255-faa9c2adb8b1-part14', 'scsi-SQEMU_QEMU_HARDDISK_7818c38c-8f07-44e8-a255-faa9c2adb8b1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7818c38c-8f07-44e8-a255-faa9c2adb8b1-part15', 'scsi-SQEMU_QEMU_HARDDISK_7818c38c-8f07-44e8-a255-faa9c2adb8b1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 
512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7818c38c-8f07-44e8-a255-faa9c2adb8b1-part16', 'scsi-SQEMU_QEMU_HARDDISK_7818c38c-8f07-44e8-a255-faa9c2adb8b1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:54:26.752910 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8be3c046--75c4--5df6--b59b--0076bb3a4ccd-osd--block--8be3c046--75c4--5df6--b59b--0076bb3a4ccd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9521cP-6yAn-ReSb-qmNR-WXii-Wgkw-QXC1e3', 'scsi-0QEMU_QEMU_HARDDISK_b303d5ed-b20f-4882-90f3-23adead236a1', 'scsi-SQEMU_QEMU_HARDDISK_b303d5ed-b20f-4882-90f3-23adead236a1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:54:26.752923 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f8ec8ce8--a083--5a5f--ae06--780cf5acbe42-osd--block--f8ec8ce8--a083--5a5f--ae06--780cf5acbe42'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-INXKyE-D6vr-McjC-Eu2E-DpjP-73lP-XlCfb6', 'scsi-0QEMU_QEMU_HARDDISK_751344b4-b2fa-492b-b080-e9e5b4c67369', 'scsi-SQEMU_QEMU_HARDDISK_751344b4-b2fa-492b-b080-e9e5b4c67369'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:54:26.752939 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11616408-d26b-4882-b347-f5b812b9aa41', 'scsi-SQEMU_QEMU_HARDDISK_11616408-d26b-4882-b347-f5b812b9aa41'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:54:26.752959 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-13-00-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:54:26.752977 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.752992 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.753011 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.753028 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.753046 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.753064 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.753081 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.753097 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:26.753139 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968', 'scsi-SQEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968-part1', 'scsi-SQEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968-part14', 'scsi-SQEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968-part15', 'scsi-SQEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968-part16', 'scsi-SQEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:54:26.753171 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--76cf46ce--80cb--5d18--8384--c0838affc5b6-osd--block--76cf46ce--80cb--5d18--8384--c0838affc5b6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lQjeTc-DDG1-udOt-seuP-O91I-YAn0-aXReDq', 'scsi-0QEMU_QEMU_HARDDISK_375d9971-3091-4ee7-ad22-0f2ee4316c51', 'scsi-SQEMU_QEMU_HARDDISK_375d9971-3091-4ee7-ad22-0f2ee4316c51'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:54:26.753189 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--465622e3--903d--5505--a41f--76599f0f3897-osd--block--465622e3--903d--5505--a41f--76599f0f3897'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NoBQRK-UZsD-zyBB-c03p-ieDL-salv-zZqPVl', 'scsi-0QEMU_QEMU_HARDDISK_80678cc2-85df-4096-9cf9-3a4ced065123', 'scsi-SQEMU_QEMU_HARDDISK_80678cc2-85df-4096-9cf9-3a4ced065123'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:54:26.753206 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e2296ca-3498-48cf-a25a-293306b54174', 'scsi-SQEMU_QEMU_HARDDISK_1e2296ca-3498-48cf-a25a-293306b54174'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:54:26.753228 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-13-00-30-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:54:26.753262 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.753279 | orchestrator | 2025-07-12 13:54:26.753296 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-07-12 13:54:26.753315 | orchestrator | Saturday 12 July 2025 13:42:52 +0000 (0:00:01.758) 0:00:32.460 ********* 2025-07-12 13:54:26.753331 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:26.753349 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:26.753368 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:26.753385 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:26.753404 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:26.753429 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:26.753501 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2025-07-12 13:54:26.753523 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:26.753544 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aadb0376-3a3f-4e17-8a7c-6b5eb89f12e4', 'scsi-SQEMU_QEMU_HARDDISK_aadb0376-3a3f-4e17-8a7c-6b5eb89f12e4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aadb0376-3a3f-4e17-8a7c-6b5eb89f12e4-part1', 'scsi-SQEMU_QEMU_HARDDISK_aadb0376-3a3f-4e17-8a7c-6b5eb89f12e4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aadb0376-3a3f-4e17-8a7c-6b5eb89f12e4-part14', 'scsi-SQEMU_QEMU_HARDDISK_aadb0376-3a3f-4e17-8a7c-6b5eb89f12e4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aadb0376-3a3f-4e17-8a7c-6b5eb89f12e4-part15', 'scsi-SQEMU_QEMU_HARDDISK_aadb0376-3a3f-4e17-8a7c-6b5eb89f12e4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aadb0376-3a3f-4e17-8a7c-6b5eb89f12e4-part16', 'scsi-SQEMU_QEMU_HARDDISK_aadb0376-3a3f-4e17-8a7c-6b5eb89f12e4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-07-12 13:54:26.753590 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-13-00-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:26.753612 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:26.753632 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.753650 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:26.753662 | orchestrator | skipping: [testbed-node-1] => (item=loop2; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 13:54:26.753673 | orchestrator | skipping: [testbed-node-1] => (item=loop3; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 13:54:26.753684 | orchestrator | skipping: [testbed-node-1] => (item=loop4; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 13:54:26.753708 | orchestrator | skipping: [testbed-node-1] => (item=loop5; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 13:54:26.753727 | orchestrator | skipping: [testbed-node-1] => (item=loop6; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 13:54:26.753739 | orchestrator | skipping: [testbed-node-1] => (item=loop7; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 13:54:26.753750 | orchestrator | skipping: [testbed-node-2] => (item=loop0; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 13:54:26.753767 | orchestrator | skipping: [testbed-node-1] => (item=sda, 'QEMU HARDDISK', 80.00 GB; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 13:54:26.753793 | orchestrator | skipping: [testbed-node-2] => (item=loop1; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 13:54:26.753805 | orchestrator | skipping: [testbed-node-2] => (item=loop2; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 13:54:26.753816 | orchestrator | skipping: [testbed-node-1] => (item=sr0, 'QEMU DVD-ROM', 506.00 KB; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 13:54:26.753827 | orchestrator | skipping: [testbed-node-2] => (item=loop3; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 13:54:26.753839 | orchestrator | skipping: [testbed-node-2] => (item=loop4; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 13:54:26.753856 | orchestrator | skipping: [testbed-node-2] => (item=loop5; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 13:54:26.754094 | orchestrator | skipping: [testbed-node-2] => (item=loop6; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 13:54:26.754117 | orchestrator | skipping: [testbed-node-2] => (item=loop7; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 13:54:26.754130 | orchestrator | skipping: [testbed-node-2] => (item=sda, 'QEMU HARDDISK', 80.00 GB; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 13:54:26.754161 | orchestrator | skipping: [testbed-node-2] => (item=sr0, 'QEMU DVD-ROM', 506.00 KB; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 13:54:26.754173 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.754191 | orchestrator | skipping: [testbed-node-3] => (item=dm-0, 20.00 GB; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 13:54:26.754203 | orchestrator | skipping: [testbed-node-3] => (item=dm-1, 20.00 GB; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 13:54:26.754215 | orchestrator | skipping: [testbed-node-3] => (item=loop0; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 13:54:26.754227 | orchestrator | skipping: [testbed-node-3] => (item=loop1; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 13:54:26.754244 | orchestrator | skipping: [testbed-node-3] => (item=loop2; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 13:54:26.754260 | orchestrator | skipping: [testbed-node-3] => (item=loop3; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 13:54:26.754278 | orchestrator | skipping: [testbed-node-3] => (item=loop4; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 13:54:26.754289 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.754300 | orchestrator | skipping: [testbed-node-3] => (item=loop5; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 13:54:26.754311 | orchestrator | skipping: [testbed-node-3] => (item=loop6; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 13:54:26.754322 | orchestrator | skipping: [testbed-node-3] => (item=loop7; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 13:54:26.754346 | orchestrator | skipping: [testbed-node-3] => (item=sda, 'QEMU HARDDISK', 80.00 GB; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 13:54:26.754367 | orchestrator | skipping: [testbed-node-3] => (item=sdb, 'QEMU HARDDISK', 20.00 GB, holder ceph OSD LV; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 13:54:26.754378 | orchestrator | skipping: [testbed-node-4] => (item=dm-0, 20.00 GB; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 13:54:26.754390 | orchestrator | skipping: [testbed-node-3] => (item=sdc, 'QEMU HARDDISK', 20.00 GB, holder ceph OSD LV; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 13:54:26.754412 | orchestrator | skipping: [testbed-node-4] => (item=dm-1, 20.00 GB; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 13:54:26.754431 | orchestrator | skipping: [testbed-node-3] => (item=sdd, 'QEMU HARDDISK', 20.00 GB; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 13:54:26.754475 | orchestrator | skipping: [testbed-node-4] => (item=loop0; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 13:54:26.754488 | orchestrator | skipping: [testbed-node-3] => (item=sr0, 'QEMU DVD-ROM', 506.00 KB; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 13:54:26.754499 | orchestrator | skipping: [testbed-node-4] => (item=loop1; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 13:54:26.754517 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.754528 | orchestrator | skipping: [testbed-node-4] => (item=loop2; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 13:54:26.754544 | orchestrator | skipping: [testbed-node-4] => (item=loop3; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 13:54:26.754562 | orchestrator | skipping: [testbed-node-4] => (item=loop4; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 13:54:26.754574 | orchestrator | skipping: [testbed-node-5] => (item=dm-0, 20.00 GB; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 13:54:26.754585 | orchestrator | skipping: [testbed-node-4] => (item=loop5; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 13:54:26.754596 | orchestrator | skipping: [testbed-node-5] => (item=dm-1, 20.00 GB; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 13:54:26.754662 | orchestrator | skipping: [testbed-node-4] => (item=loop6; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 13:54:26.754681 | orchestrator | skipping: [testbed-node-5] => (item=loop0; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 13:54:26.754702 | orchestrator | skipping: [testbed-node-4] => (item=loop7; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 13:54:26.754715 | orchestrator | skipping: [testbed-node-5] => (item=loop1; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 13:54:26.754729 | orchestrator | skipping: [testbed-node-4] => (item=sda, 'QEMU HARDDISK', 80.00 GB; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 13:54:26.754754 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:26.754775 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8be3c046--75c4--5df6--b59b--0076bb3a4ccd-osd--block--8be3c046--75c4--5df6--b59b--0076bb3a4ccd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9521cP-6yAn-ReSb-qmNR-WXii-Wgkw-QXC1e3', 'scsi-0QEMU_QEMU_HARDDISK_b303d5ed-b20f-4882-90f3-23adead236a1', 'scsi-SQEMU_QEMU_HARDDISK_b303d5ed-b20f-4882-90f3-23adead236a1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:26.754789 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:26.754802 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f8ec8ce8--a083--5a5f--ae06--780cf5acbe42-osd--block--f8ec8ce8--a083--5a5f--ae06--780cf5acbe42'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-INXKyE-D6vr-McjC-Eu2E-DpjP-73lP-XlCfb6', 'scsi-0QEMU_QEMU_HARDDISK_751344b4-b2fa-492b-b080-e9e5b4c67369', 'scsi-SQEMU_QEMU_HARDDISK_751344b4-b2fa-492b-b080-e9e5b4c67369'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:26.754820 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:26.754836 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11616408-d26b-4882-b347-f5b812b9aa41', 'scsi-SQEMU_QEMU_HARDDISK_11616408-d26b-4882-b347-f5b812b9aa41'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:26.754852 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:26.754864 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-13-00-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:26.754876 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:26.754897 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.754908 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:26.754931 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968', 'scsi-SQEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968-part1', 'scsi-SQEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968-part14', 'scsi-SQEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968-part15', 'scsi-SQEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968-part16', 'scsi-SQEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-07-12 13:54:26.754944 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--76cf46ce--80cb--5d18--8384--c0838affc5b6-osd--block--76cf46ce--80cb--5d18--8384--c0838affc5b6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lQjeTc-DDG1-udOt-seuP-O91I-YAn0-aXReDq', 'scsi-0QEMU_QEMU_HARDDISK_375d9971-3091-4ee7-ad22-0f2ee4316c51', 'scsi-SQEMU_QEMU_HARDDISK_375d9971-3091-4ee7-ad22-0f2ee4316c51'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:26.754956 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--465622e3--903d--5505--a41f--76599f0f3897-osd--block--465622e3--903d--5505--a41f--76599f0f3897'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NoBQRK-UZsD-zyBB-c03p-ieDL-salv-zZqPVl', 'scsi-0QEMU_QEMU_HARDDISK_80678cc2-85df-4096-9cf9-3a4ced065123', 'scsi-SQEMU_QEMU_HARDDISK_80678cc2-85df-4096-9cf9-3a4ced065123'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:26.754974 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e2296ca-3498-48cf-a25a-293306b54174', 'scsi-SQEMU_QEMU_HARDDISK_1e2296ca-3498-48cf-a25a-293306b54174'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:26.754990 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-13-00-30-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:26.755001 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.755012 | orchestrator |
2025-07-12 13:54:26.755024 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-07-12 13:54:26.755035 | orchestrator | Saturday 12 July 2025 13:42:54 +0000 (0:00:01.735) 0:00:34.196 *********
2025-07-12 13:54:26.755045 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.755057 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:26.755067 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:26.755083 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:26.755094 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:26.755105 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:26.755115 | orchestrator |
2025-07-12 13:54:26.755126 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-07-12 13:54:26.755137 | orchestrator | Saturday 12 July 2025 13:42:55 +0000 (0:00:01.274) 0:00:35.470 *********
2025-07-12 13:54:26.755148 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.755158 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:26.755169 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:26.755179 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:26.755189 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:26.755199 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:26.755210 | orchestrator |
2025-07-12 13:54:26.755221 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-07-12 13:54:26.755231 | orchestrator | Saturday 12 July 2025 13:42:56 +0000 (0:00:00.871) 0:00:36.341 *********
2025-07-12 13:54:26.755248 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.755259 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.755269 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.755280 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.755290 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.755301 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.755311 | orchestrator |
2025-07-12 13:54:26.755322 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-07-12 13:54:26.755332 | orchestrator | Saturday 12 July 2025 13:42:57 +0000 (0:00:01.150) 0:00:37.492 *********
2025-07-12 13:54:26.755343 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.755353 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.755364 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.755374 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.755385 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.755395 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.755406 | orchestrator |
2025-07-12 13:54:26.755416 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-07-12 13:54:26.755427 | orchestrator | Saturday 12 July 2025 13:42:58 +0000 (0:00:00.695) 0:00:38.188 *********
2025-07-12 13:54:26.755437 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.755470 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.755482 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.755492 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.755503 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.755514 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.755524 | orchestrator |
2025-07-12 13:54:26.755535 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-07-12 13:54:26.755546 | orchestrator | Saturday 12 July 2025 13:42:59 +0000 (0:00:01.263) 0:00:39.452 *********
2025-07-12 13:54:26.755557 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.755567 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.755578 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.755588 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.755599 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.755609 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.755620 | orchestrator |
2025-07-12 13:54:26.755631 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-07-12 13:54:26.755641 | orchestrator | Saturday 12 July 2025 13:43:01 +0000 (0:00:01.319) 0:00:40.772 *********
2025-07-12 13:54:26.755652 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 13:54:26.755663 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-07-12 13:54:26.755673 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-07-12 13:54:26.755684 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-07-12 13:54:26.755694 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-07-12 13:54:26.755705 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-07-12 13:54:26.755715 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-07-12 13:54:26.755726 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-07-12 13:54:26.755736 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-07-12 13:54:26.755746 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-07-12 13:54:26.755757 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-07-12 13:54:26.755767 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-07-12 13:54:26.755778 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-07-12 13:54:26.755788 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-07-12 13:54:26.755799 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-07-12 13:54:26.755809 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-07-12 13:54:26.755820 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-07-12 13:54:26.755830 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-07-12 13:54:26.755847 | orchestrator |
2025-07-12 13:54:26.755858 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-07-12 13:54:26.755869 | orchestrator | Saturday 12 July 2025 13:43:04 +0000 (0:00:03.675) 0:00:44.447 *********
2025-07-12 13:54:26.755880 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 13:54:26.755890 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-07-12 13:54:26.755901 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-07-12 13:54:26.755916 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.755927 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-07-12 13:54:26.755937 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-07-12 13:54:26.755948 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-07-12 13:54:26.755958 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.755969 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-07-12 13:54:26.755979 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-07-12 13:54:26.755990 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-07-12 13:54:26.756001 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.756017 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-07-12 13:54:26.756028 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-07-12 13:54:26.756039 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-07-12 13:54:26.756049 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.756060 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-07-12 13:54:26.756070 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-07-12 13:54:26.756081 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-07-12 13:54:26.756091 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.756102 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-07-12 13:54:26.756112 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-07-12 13:54:26.756123 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-07-12 13:54:26.756133 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.756144 | orchestrator |
2025-07-12 13:54:26.756155 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-07-12 13:54:26.756165 | orchestrator | Saturday 12 July 2025 13:43:05 +0000 (0:00:00.726) 0:00:45.174 *********
2025-07-12 13:54:26.756176 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.756186 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.756197 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.756208 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:54:26.756218 | orchestrator |
2025-07-12 13:54:26.756229 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-07-12 13:54:26.756241 | orchestrator | Saturday 12 July 2025 13:43:06 +0000 (0:00:01.059) 0:00:46.233 *********
2025-07-12 13:54:26.756251 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.756262 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.756272 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.756283 | orchestrator |
2025-07-12 13:54:26.756293 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-07-12 13:54:26.756304 | orchestrator | Saturday 12 July 2025 13:43:07 +0000 (0:00:00.445) 0:00:46.679 *********
2025-07-12 13:54:26.756315 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.756325 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.756336 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.756346 | orchestrator |
2025-07-12 13:54:26.756357 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-07-12 13:54:26.756374 | orchestrator | Saturday 12 July 2025 13:43:07 +0000 (0:00:00.446) 0:00:47.125 *********
2025-07-12 13:54:26.756385 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.756396 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.756406 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.756417 | orchestrator |
2025-07-12 13:54:26.756428 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-07-12 13:54:26.756439 | orchestrator | Saturday 12 July 2025 13:43:07 +0000 (0:00:00.357) 0:00:47.483 *********
2025-07-12 13:54:26.756467 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:26.756478 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:26.756488 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:26.756499 | orchestrator |
2025-07-12 13:54:26.756510 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-07-12 13:54:26.756520 | orchestrator | Saturday 12 July 2025 13:43:08 +0000 (0:00:00.627) 0:00:48.110 *********
2025-07-12 13:54:26.756531 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-12 13:54:26.756541 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-12 13:54:26.756552 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-12 13:54:26.756563 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.756573 | orchestrator |
2025-07-12 13:54:26.756584 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-07-12 13:54:26.756594 | orchestrator | Saturday 12 July 2025 13:43:09 +0000 (0:00:00.449) 0:00:48.560 *********
2025-07-12 13:54:26.756605 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-12 13:54:26.756616 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-12 13:54:26.756626 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-12 13:54:26.756637 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.756647 | orchestrator |
2025-07-12 13:54:26.756658 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-07-12 13:54:26.756669 | orchestrator | Saturday 12 July 2025 13:43:09 +0000 (0:00:00.494) 0:00:49.054 *********
2025-07-12 13:54:26.756679 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-12 13:54:26.756690 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-12 13:54:26.756700 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-12 13:54:26.756711 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.756721 | orchestrator |
2025-07-12 13:54:26.756732 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-07-12 13:54:26.756743 | orchestrator | Saturday 12 July 2025 13:43:10 +0000 (0:00:00.577) 0:00:49.739 *********
2025-07-12 13:54:26.756753 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:26.756769 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:26.756779 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:26.756790 | orchestrator |
2025-07-12 13:54:26.756801 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-07-12 13:54:26.756812 | orchestrator | Saturday 12 July 2025 13:43:10 +0000 (0:00:00.577) 0:00:50.316 *********
2025-07-12 13:54:26.756822 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-07-12 13:54:26.756833 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-07-12 13:54:26.756844 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-07-12 13:54:26.756854 | orchestrator |
2025-07-12 13:54:26.756865 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-07-12 13:54:26.756876 | orchestrator | Saturday 12 July 2025 13:43:11 +0000 (0:00:00.715) 0:00:51.032 *********
2025-07-12 13:54:26.756891 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 13:54:26.756903 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-12 13:54:26.756914 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-12 13:54:26.756924 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-07-12 13:54:26.756942 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-07-12 13:54:26.756953 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-07-12 13:54:26.756963 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-07-12 13:54:26.756974 | orchestrator |
2025-07-12 13:54:26.756985 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-07-12 13:54:26.756996 | orchestrator | Saturday 12 July 2025 13:43:12 +0000 (0:00:00.963) 0:00:51.995 *********
2025-07-12 13:54:26.757006 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 13:54:26.757017 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-12 13:54:26.757027 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-12 13:54:26.757038 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-07-12 13:54:26.757049 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-07-12 13:54:26.757059 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-07-12 13:54:26.757069 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-07-12 13:54:26.757080 | orchestrator |
2025-07-12 13:54:26.757091 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-07-12 13:54:26.757101 | orchestrator | Saturday 12 July 2025 13:43:14 +0000 (0:00:01.840) 0:00:53.836 *********
2025-07-12 13:54:26.757112 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:54:26.757124 | orchestrator |
2025-07-12 13:54:26.757135 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-07-12 13:54:26.757146 | orchestrator | Saturday 12 July 2025 13:43:15 +0000 (0:00:01.226) 0:00:55.062 *********
2025-07-12 13:54:26.757157 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:54:26.757167 | orchestrator |
2025-07-12 13:54:26.757178 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-07-12 13:54:26.757189 | orchestrator | Saturday 12 July 2025 13:43:16 +0000 (0:00:01.041) 0:00:56.103 *********
2025-07-12 13:54:26.757199 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.757210 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.757220 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.757231 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:26.757241 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.757252 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:26.757263 | orchestrator |
2025-07-12 13:54:26.757274 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-07-12 13:54:26.757284 | orchestrator | Saturday 12 July 2025 13:43:17 +0000 (0:00:01.025) 0:00:57.128 *********
2025-07-12 13:54:26.757295 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.757305 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.757316 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.757327 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:26.757337 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:26.757348 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:26.757359 | orchestrator |
2025-07-12 13:54:26.757370 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-07-12 13:54:26.757380 | orchestrator | Saturday 12 July 2025 13:43:18 +0000 (0:00:01.266) 0:00:58.395 *********
2025-07-12 13:54:26.757391 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.757401 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.757412 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.757422 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:26.757439 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:26.757468 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:26.757479 | orchestrator |
2025-07-12 13:54:26.757491 | orchestrator | TASK [ceph-handler : Check for a rgw container]
******************************** 2025-07-12 13:54:26.757501 | orchestrator | Saturday 12 July 2025 13:43:20 +0000 (0:00:01.489) 0:00:59.884 ********* 2025-07-12 13:54:26.757512 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.757523 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.757533 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.757544 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.757554 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.757565 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.757575 | orchestrator | 2025-07-12 13:54:26.757586 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-07-12 13:54:26.757597 | orchestrator | Saturday 12 July 2025 13:43:21 +0000 (0:00:01.351) 0:01:01.235 ********* 2025-07-12 13:54:26.757608 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:26.757618 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:26.757629 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.757639 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:26.757650 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.757661 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.757671 | orchestrator | 2025-07-12 13:54:26.757682 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-07-12 13:54:26.757693 | orchestrator | Saturday 12 July 2025 13:43:23 +0000 (0:00:01.313) 0:01:02.549 ********* 2025-07-12 13:54:26.757709 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.757720 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.757731 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.757741 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.757752 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.757762 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.757773 | 
orchestrator | 2025-07-12 13:54:26.757784 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-07-12 13:54:26.757878 | orchestrator | Saturday 12 July 2025 13:43:23 +0000 (0:00:00.696) 0:01:03.246 ********* 2025-07-12 13:54:26.757900 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.757911 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.757921 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.757932 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.757942 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.757952 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.757963 | orchestrator | 2025-07-12 13:54:26.757974 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-07-12 13:54:26.757984 | orchestrator | Saturday 12 July 2025 13:43:24 +0000 (0:00:01.176) 0:01:04.422 ********* 2025-07-12 13:54:26.757995 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:26.758006 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:26.758068 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:26.758082 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.758093 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.758103 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.758114 | orchestrator | 2025-07-12 13:54:26.758126 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-12 13:54:26.758143 | orchestrator | Saturday 12 July 2025 13:43:26 +0000 (0:00:01.362) 0:01:05.784 ********* 2025-07-12 13:54:26.758159 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:26.758169 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:26.758180 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:26.758190 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.758200 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.758210 | 
orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.758221 | orchestrator | 2025-07-12 13:54:26.758232 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-12 13:54:26.758251 | orchestrator | Saturday 12 July 2025 13:43:28 +0000 (0:00:01.864) 0:01:07.649 ********* 2025-07-12 13:54:26.758261 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.758272 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.758282 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.758293 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.758303 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.758314 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.758324 | orchestrator | 2025-07-12 13:54:26.758335 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-07-12 13:54:26.758345 | orchestrator | Saturday 12 July 2025 13:43:29 +0000 (0:00:00.942) 0:01:08.592 ********* 2025-07-12 13:54:26.758356 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:26.758366 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:26.758377 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:26.758387 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.758398 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.758408 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.758419 | orchestrator | 2025-07-12 13:54:26.758430 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-12 13:54:26.758440 | orchestrator | Saturday 12 July 2025 13:43:30 +0000 (0:00:01.631) 0:01:10.223 ********* 2025-07-12 13:54:26.758510 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.758522 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.758532 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.758543 | orchestrator | ok: 
[testbed-node-3] 2025-07-12 13:54:26.758553 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.758564 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.758574 | orchestrator | 2025-07-12 13:54:26.758585 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-07-12 13:54:26.758596 | orchestrator | Saturday 12 July 2025 13:43:31 +0000 (0:00:00.862) 0:01:11.085 ********* 2025-07-12 13:54:26.758606 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.758617 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.758627 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.758638 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.758648 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.758659 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.758669 | orchestrator | 2025-07-12 13:54:26.758680 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-07-12 13:54:26.758691 | orchestrator | Saturday 12 July 2025 13:43:32 +0000 (0:00:00.945) 0:01:12.031 ********* 2025-07-12 13:54:26.758701 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.758712 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.758722 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.758732 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.758743 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.758753 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.758764 | orchestrator | 2025-07-12 13:54:26.758774 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-07-12 13:54:26.758785 | orchestrator | Saturday 12 July 2025 13:43:33 +0000 (0:00:00.862) 0:01:12.894 ********* 2025-07-12 13:54:26.758795 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.758806 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.758817 | 
orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.758849 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.758860 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.758871 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.758881 | orchestrator | 2025-07-12 13:54:26.758898 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-12 13:54:26.758908 | orchestrator | Saturday 12 July 2025 13:43:34 +0000 (0:00:01.139) 0:01:14.033 ********* 2025-07-12 13:54:26.758919 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.758930 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.758948 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.758958 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.758969 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.758979 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.758989 | orchestrator | 2025-07-12 13:54:26.759000 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-12 13:54:26.759028 | orchestrator | Saturday 12 July 2025 13:43:35 +0000 (0:00:00.524) 0:01:14.558 ********* 2025-07-12 13:54:26.759038 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:26.759048 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:26.759057 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:26.759067 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.759076 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.759085 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.759095 | orchestrator | 2025-07-12 13:54:26.759105 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-12 13:54:26.759114 | orchestrator | Saturday 12 July 2025 13:43:35 +0000 (0:00:00.660) 0:01:15.218 ********* 2025-07-12 13:54:26.759124 | orchestrator | ok: 
[testbed-node-0] 2025-07-12 13:54:26.759133 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:26.759142 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:26.759152 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.759161 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.759170 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.759180 | orchestrator | 2025-07-12 13:54:26.759189 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-07-12 13:54:26.759199 | orchestrator | Saturday 12 July 2025 13:43:36 +0000 (0:00:00.544) 0:01:15.763 ********* 2025-07-12 13:54:26.759208 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:26.759217 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:26.759227 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:26.759236 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.759245 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.759254 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.759264 | orchestrator | 2025-07-12 13:54:26.759273 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-07-12 13:54:26.759283 | orchestrator | Saturday 12 July 2025 13:43:37 +0000 (0:00:01.060) 0:01:16.824 ********* 2025-07-12 13:54:26.759292 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:54:26.759301 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:54:26.759311 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:54:26.759320 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:54:26.759329 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:54:26.759339 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:54:26.759348 | orchestrator | 2025-07-12 13:54:26.759357 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-07-12 13:54:26.759367 | orchestrator | Saturday 12 July 2025 13:43:38 +0000 (0:00:01.570) 0:01:18.394 
********* 2025-07-12 13:54:26.759376 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:54:26.759386 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:54:26.759395 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:54:26.759404 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:54:26.759414 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:54:26.759423 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:54:26.759433 | orchestrator | 2025-07-12 13:54:26.759442 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-07-12 13:54:26.759467 | orchestrator | Saturday 12 July 2025 13:43:41 +0000 (0:00:02.231) 0:01:20.626 ********* 2025-07-12 13:54:26.759477 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:26.759487 | orchestrator | 2025-07-12 13:54:26.759496 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-07-12 13:54:26.759506 | orchestrator | Saturday 12 July 2025 13:43:42 +0000 (0:00:01.411) 0:01:22.037 ********* 2025-07-12 13:54:26.759526 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.759536 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.759546 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.759555 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.759564 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.759574 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.759583 | orchestrator | 2025-07-12 13:54:26.759593 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-07-12 13:54:26.759602 | orchestrator | Saturday 12 July 2025 13:43:43 +0000 (0:00:00.894) 0:01:22.931 ********* 2025-07-12 13:54:26.759611 | orchestrator | skipping: [testbed-node-0] 
2025-07-12 13:54:26.759621 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.759630 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.759640 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.759649 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.759658 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.759668 | orchestrator | 2025-07-12 13:54:26.759677 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-07-12 13:54:26.759687 | orchestrator | Saturday 12 July 2025 13:43:44 +0000 (0:00:00.665) 0:01:23.597 ********* 2025-07-12 13:54:26.759696 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-07-12 13:54:26.759706 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-07-12 13:54:26.759715 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-07-12 13:54:26.759725 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-07-12 13:54:26.759734 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-07-12 13:54:26.759744 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-07-12 13:54:26.759757 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-07-12 13:54:26.759767 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-07-12 13:54:26.759776 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-07-12 13:54:26.759786 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-07-12 13:54:26.759795 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-07-12 13:54:26.759805 | 
orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-07-12 13:54:26.759814 | orchestrator | 2025-07-12 13:54:26.759830 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-07-12 13:54:26.759840 | orchestrator | Saturday 12 July 2025 13:43:45 +0000 (0:00:01.725) 0:01:25.322 ********* 2025-07-12 13:54:26.759849 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:54:26.759859 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:54:26.759868 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:54:26.759878 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:54:26.759887 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:54:26.759897 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:54:26.759906 | orchestrator | 2025-07-12 13:54:26.759915 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-07-12 13:54:26.759925 | orchestrator | Saturday 12 July 2025 13:43:46 +0000 (0:00:01.084) 0:01:26.407 ********* 2025-07-12 13:54:26.759934 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.759944 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.759953 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.759962 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.759972 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.759981 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.759991 | orchestrator | 2025-07-12 13:54:26.760000 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-07-12 13:54:26.760016 | orchestrator | Saturday 12 July 2025 13:43:47 +0000 (0:00:00.913) 0:01:27.320 ********* 2025-07-12 13:54:26.760026 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.760035 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.760045 | orchestrator | skipping: 
[testbed-node-2] 2025-07-12 13:54:26.760054 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.760063 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.760073 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.760082 | orchestrator | 2025-07-12 13:54:26.760091 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-07-12 13:54:26.760101 | orchestrator | Saturday 12 July 2025 13:43:48 +0000 (0:00:00.644) 0:01:27.964 ********* 2025-07-12 13:54:26.760110 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.760120 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.760129 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.760138 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.760148 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.760157 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.760166 | orchestrator | 2025-07-12 13:54:26.760176 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-07-12 13:54:26.760185 | orchestrator | Saturday 12 July 2025 13:43:49 +0000 (0:00:00.824) 0:01:28.788 ********* 2025-07-12 13:54:26.760195 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:26.760205 | orchestrator | 2025-07-12 13:54:26.760214 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-07-12 13:54:26.760224 | orchestrator | Saturday 12 July 2025 13:43:50 +0000 (0:00:01.214) 0:01:30.003 ********* 2025-07-12 13:54:26.760233 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.760243 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:26.760252 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:26.760262 | orchestrator | ok: [testbed-node-4] 2025-07-12 
13:54:26.760271 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.760281 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:26.760290 | orchestrator | 2025-07-12 13:54:26.760300 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-07-12 13:54:26.760309 | orchestrator | Saturday 12 July 2025 13:45:40 +0000 (0:01:50.206) 0:03:20.209 ********* 2025-07-12 13:54:26.760319 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-07-12 13:54:26.760328 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-07-12 13:54:26.760337 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-07-12 13:54:26.760347 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.760356 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-07-12 13:54:26.760366 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-07-12 13:54:26.760375 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-07-12 13:54:26.760384 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.760394 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-07-12 13:54:26.760403 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-07-12 13:54:26.760413 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-07-12 13:54:26.760422 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.760431 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-07-12 13:54:26.760441 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-07-12 13:54:26.760465 | orchestrator | skipping: [testbed-node-3] => 
(item=docker.io/grafana/grafana:6.7.4)  2025-07-12 13:54:26.760481 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.760495 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-07-12 13:54:26.760505 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-07-12 13:54:26.760514 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-07-12 13:54:26.760524 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.760534 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-07-12 13:54:26.760543 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-07-12 13:54:26.760552 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-07-12 13:54:26.760568 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.760579 | orchestrator | 2025-07-12 13:54:26.760588 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-07-12 13:54:26.760597 | orchestrator | Saturday 12 July 2025 13:45:41 +0000 (0:00:01.110) 0:03:21.320 ********* 2025-07-12 13:54:26.760607 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.760616 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.760626 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.760635 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.760645 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.760654 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.760664 | orchestrator | 2025-07-12 13:54:26.760673 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-07-12 13:54:26.760683 | orchestrator | Saturday 12 July 2025 13:45:42 +0000 (0:00:00.738) 0:03:22.058 ********* 2025-07-12 13:54:26.760693 | orchestrator | skipping: 
[testbed-node-0] 2025-07-12 13:54:26.760702 | orchestrator | 2025-07-12 13:54:26.760711 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-07-12 13:54:26.760728 | orchestrator | Saturday 12 July 2025 13:45:42 +0000 (0:00:00.177) 0:03:22.236 ********* 2025-07-12 13:54:26.760745 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.760763 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.760793 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.760807 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.760821 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.760836 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.760852 | orchestrator | 2025-07-12 13:54:26.760867 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-07-12 13:54:26.760881 | orchestrator | Saturday 12 July 2025 13:45:43 +0000 (0:00:01.059) 0:03:23.295 ********* 2025-07-12 13:54:26.760895 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.760908 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.760922 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.760936 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.760950 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.760965 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.760981 | orchestrator | 2025-07-12 13:54:26.760996 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-07-12 13:54:26.761011 | orchestrator | Saturday 12 July 2025 13:45:44 +0000 (0:00:00.651) 0:03:23.946 ********* 2025-07-12 13:54:26.761027 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.761042 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.761058 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.761073 | orchestrator | skipping: 
[testbed-node-3]
2025-07-12 13:54:26.761089 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.761104 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.761121 | orchestrator |
2025-07-12 13:54:26.761133 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2025-07-12 13:54:26.761142 | orchestrator | Saturday 12 July 2025 13:45:45 +0000 (0:00:00.748) 0:03:24.695 *********
2025-07-12 13:54:26.761152 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.761171 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:26.761180 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:26.761189 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:26.761198 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:26.761208 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:26.761217 | orchestrator |
2025-07-12 13:54:26.761227 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2025-07-12 13:54:26.761237 | orchestrator | Saturday 12 July 2025 13:45:47 +0000 (0:00:02.639) 0:03:27.335 *********
2025-07-12 13:54:26.761246 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.761255 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:26.761264 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:26.761274 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:26.761283 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:26.761292 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:26.761301 | orchestrator |
2025-07-12 13:54:26.761311 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2025-07-12 13:54:26.761320 | orchestrator | Saturday 12 July 2025 13:45:48 +0000 (0:00:00.818) 0:03:28.153 *********
2025-07-12 13:54:26.761330 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:54:26.761341 | orchestrator |
2025-07-12 13:54:26.761350 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2025-07-12 13:54:26.761360 | orchestrator | Saturday 12 July 2025 13:45:49 +0000 (0:00:01.114) 0:03:29.268 *********
2025-07-12 13:54:26.761369 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.761379 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.761388 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.761397 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.761406 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.761416 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.761425 | orchestrator |
2025-07-12 13:54:26.761434 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2025-07-12 13:54:26.761497 | orchestrator | Saturday 12 July 2025 13:45:50 +0000 (0:00:00.573) 0:03:29.842 *********
2025-07-12 13:54:26.761508 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.761518 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.761527 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.761536 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.761546 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.761561 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.761571 | orchestrator |
2025-07-12 13:54:26.761580 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2025-07-12 13:54:26.761589 | orchestrator | Saturday 12 July 2025 13:45:51 +0000 (0:00:00.702) 0:03:30.544 *********
2025-07-12 13:54:26.761599 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.761608 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.761617 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.761627 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.761636 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.761645 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.761654 | orchestrator |
2025-07-12 13:54:26.761664 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2025-07-12 13:54:26.761682 | orchestrator | Saturday 12 July 2025 13:45:51 +0000 (0:00:00.580) 0:03:31.124 *********
2025-07-12 13:54:26.761692 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.761701 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.761710 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.761720 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.761729 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.761738 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.761748 | orchestrator |
2025-07-12 13:54:26.761757 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2025-07-12 13:54:26.761773 | orchestrator | Saturday 12 July 2025 13:45:52 +0000 (0:00:00.892) 0:03:32.017 *********
2025-07-12 13:54:26.761782 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.761792 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.761801 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.761810 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.761820 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.761829 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.761838 | orchestrator |
2025-07-12 13:54:26.761848 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2025-07-12 13:54:26.761857 | orchestrator | Saturday 12 July 2025 13:45:53 +0000 (0:00:00.765) 0:03:32.783 *********
2025-07-12 13:54:26.761866 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.761876 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.761885 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.761894 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.761904 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.761913 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.761922 | orchestrator |
2025-07-12 13:54:26.761932 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2025-07-12 13:54:26.761941 | orchestrator | Saturday 12 July 2025 13:45:54 +0000 (0:00:01.104) 0:03:33.888 *********
2025-07-12 13:54:26.761950 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.761960 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.761969 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.761978 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.761987 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.761997 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.762006 | orchestrator |
2025-07-12 13:54:26.762048 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2025-07-12 13:54:26.762058 | orchestrator | Saturday 12 July 2025 13:45:54 +0000 (0:00:00.622) 0:03:34.510 *********
2025-07-12 13:54:26.762066 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.762073 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.762081 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.762088 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.762096 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.762104 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.762112 | orchestrator |
2025-07-12 13:54:26.762120 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2025-07-12 13:54:26.762127 | orchestrator | Saturday 12 July 2025 13:45:55 +0000 (0:00:00.774) 0:03:35.285 *********
2025-07-12 13:54:26.762135 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.762143 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:26.762150 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:26.762158 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:26.762166 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:26.762173 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:26.762181 | orchestrator |
2025-07-12 13:54:26.762189 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2025-07-12 13:54:26.762196 | orchestrator | Saturday 12 July 2025 13:45:56 +0000 (0:00:01.038) 0:03:36.323 *********
2025-07-12 13:54:26.762204 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:54:26.762212 | orchestrator |
2025-07-12 13:54:26.762220 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2025-07-12 13:54:26.762227 | orchestrator | Saturday 12 July 2025 13:45:57 +0000 (0:00:00.967) 0:03:37.291 *********
2025-07-12 13:54:26.762235 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2025-07-12 13:54:26.762243 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2025-07-12 13:54:26.762251 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2025-07-12 13:54:26.762263 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2025-07-12 13:54:26.762271 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2025-07-12 13:54:26.762279 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2025-07-12 13:54:26.762287 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2025-07-12 13:54:26.762294 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2025-07-12 13:54:26.762302 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2025-07-12 13:54:26.762310 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2025-07-12 13:54:26.762317 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2025-07-12 13:54:26.762325 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2025-07-12 13:54:26.762333 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2025-07-12 13:54:26.762340 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2025-07-12 13:54:26.762352 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2025-07-12 13:54:26.762359 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2025-07-12 13:54:26.762367 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2025-07-12 13:54:26.762375 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2025-07-12 13:54:26.762382 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2025-07-12 13:54:26.762390 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2025-07-12 13:54:26.762398 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2025-07-12 13:54:26.762418 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2025-07-12 13:54:26.762426 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2025-07-12 13:54:26.762433 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2025-07-12 13:54:26.762441 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2025-07-12 13:54:26.762464 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2025-07-12 13:54:26.762472 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2025-07-12 13:54:26.762479 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2025-07-12 13:54:26.762487 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2025-07-12 13:54:26.762495 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2025-07-12 13:54:26.762502 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2025-07-12 13:54:26.762510 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2025-07-12 13:54:26.762518 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2025-07-12 13:54:26.762525 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2025-07-12 13:54:26.762533 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2025-07-12 13:54:26.762541 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2025-07-12 13:54:26.762549 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2025-07-12 13:54:26.762556 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2025-07-12 13:54:26.762564 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2025-07-12 13:54:26.762572 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2025-07-12 13:54:26.762580 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2025-07-12 13:54:26.762587 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2025-07-12 13:54:26.762595 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2025-07-12 13:54:26.762602 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2025-07-12 13:54:26.762610 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2025-07-12 13:54:26.762617 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2025-07-12 13:54:26.762625 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2025-07-12 13:54:26.762643 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2025-07-12 13:54:26.762651 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2025-07-12 13:54:26.762658 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2025-07-12 13:54:26.762666 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2025-07-12 13:54:26.762673 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2025-07-12 13:54:26.762681 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2025-07-12 13:54:26.762689 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2025-07-12 13:54:26.762697 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2025-07-12 13:54:26.762704 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2025-07-12 13:54:26.762712 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2025-07-12 13:54:26.762719 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2025-07-12 13:54:26.762727 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2025-07-12 13:54:26.762735 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2025-07-12 13:54:26.762742 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2025-07-12 13:54:26.762750 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2025-07-12 13:54:26.762757 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2025-07-12 13:54:26.762765 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2025-07-12 13:54:26.762772 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2025-07-12 13:54:26.762780 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2025-07-12 13:54:26.762788 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2025-07-12 13:54:26.762795 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2025-07-12 13:54:26.762803 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2025-07-12 13:54:26.762811 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2025-07-12 13:54:26.762818 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2025-07-12 13:54:26.762826 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2025-07-12 13:54:26.762837 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2025-07-12 13:54:26.762845 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2025-07-12 13:54:26.762853 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2025-07-12 13:54:26.762860 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-07-12 13:54:26.762868 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-07-12 13:54:26.762875 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2025-07-12 13:54:26.762883 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-07-12 13:54:26.762895 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-07-12 13:54:26.762903 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2025-07-12 13:54:26.762911 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2025-07-12 13:54:26.762919 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2025-07-12 13:54:26.762927 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-07-12 13:54:26.762934 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2025-07-12 13:54:26.762942 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2025-07-12 13:54:26.762950 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2025-07-12 13:54:26.762963 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2025-07-12 13:54:26.762970 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2025-07-12 13:54:26.762978 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2025-07-12 13:54:26.762986 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2025-07-12 13:54:26.762993 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2025-07-12 13:54:26.763001 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-07-12 13:54:26.763009 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2025-07-12 13:54:26.763016 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2025-07-12 13:54:26.763024 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2025-07-12 13:54:26.763032 | orchestrator |
2025-07-12 13:54:26.763039 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2025-07-12 13:54:26.763047 | orchestrator | Saturday 12 July 2025 13:46:04 +0000 (0:00:06.767) 0:03:44.058 *********
2025-07-12 13:54:26.763055 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.763063 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.763070 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.763078 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:54:26.763086 | orchestrator |
2025-07-12 13:54:26.763094 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2025-07-12 13:54:26.763102 | orchestrator | Saturday 12 July 2025 13:46:05 +0000 (0:00:01.110) 0:03:45.169 *********
2025-07-12 13:54:26.763110 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-07-12 13:54:26.763118 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-07-12 13:54:26.763126 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-07-12 13:54:26.763133 | orchestrator |
2025-07-12 13:54:26.763141 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2025-07-12 13:54:26.763149 | orchestrator | Saturday 12 July 2025 13:46:06 +0000 (0:00:00.738) 0:03:45.908 *********
2025-07-12 13:54:26.763157 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-07-12 13:54:26.763164 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-07-12 13:54:26.763172 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-07-12 13:54:26.763180 | orchestrator |
2025-07-12 13:54:26.763188 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2025-07-12 13:54:26.763196 | orchestrator | Saturday 12 July 2025 13:46:07 +0000 (0:00:01.587) 0:03:47.496 *********
2025-07-12 13:54:26.763203 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.763211 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.763219 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.763226 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:26.763234 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:26.763242 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:26.763249 | orchestrator |
2025-07-12 13:54:26.763257 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2025-07-12 13:54:26.763265 | orchestrator | Saturday 12 July 2025 13:46:08 +0000 (0:00:00.625) 0:03:48.121 *********
2025-07-12 13:54:26.763272 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.763280 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.763288 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.763300 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:26.763308 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:26.763316 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:26.763323 | orchestrator |
2025-07-12 13:54:26.763331 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2025-07-12 13:54:26.763342 | orchestrator | Saturday 12 July 2025 13:46:09 +0000 (0:00:00.765) 0:03:48.886 *********
2025-07-12 13:54:26.763350 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.763358 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.763365 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.763373 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.763381 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.763388 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.763396 | orchestrator |
2025-07-12 13:54:26.763404 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2025-07-12 13:54:26.763412 | orchestrator | Saturday 12 July 2025 13:46:09 +0000 (0:00:00.590) 0:03:49.477 *********
2025-07-12 13:54:26.763419 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.763427 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.763439 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.763463 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.763471 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.763479 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.763486 | orchestrator |
2025-07-12 13:54:26.763494 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2025-07-12 13:54:26.763502 | orchestrator | Saturday 12 July 2025 13:46:10 +0000 (0:00:00.852) 0:03:50.329 *********
2025-07-12 13:54:26.763509 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.763517 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.763525 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.763532 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.763540 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.763548 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.763555 | orchestrator |
2025-07-12 13:54:26.763563 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-07-12 13:54:26.763571 | orchestrator | Saturday 12 July 2025 13:46:11 +0000 (0:00:00.947) 0:03:50.978 *********
2025-07-12 13:54:26.763578 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.763586 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.763594 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.763601 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.763609 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.763616 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.763624 | orchestrator |
2025-07-12 13:54:26.763632 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-07-12 13:54:26.763640 | orchestrator | Saturday 12 July 2025 13:46:12 +0000 (0:00:00.715) 0:03:51.926 *********
2025-07-12 13:54:26.763647 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.763655 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.763662 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.763670 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.763678 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.763685 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.763693 | orchestrator |
2025-07-12 13:54:26.763701 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-07-12 13:54:26.763709 | orchestrator | Saturday 12 July 2025 13:46:13 +0000 (0:00:00.818) 0:03:52.642 *********
2025-07-12 13:54:26.763716 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.763724 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.763732 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.763739 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.763747 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.763760 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.763767 | orchestrator |
2025-07-12 13:54:26.763775 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-07-12 13:54:26.763783 | orchestrator | Saturday 12 July 2025 13:46:13 +0000 (0:00:00.818) 0:03:53.460 *********
2025-07-12 13:54:26.763791 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.763798 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.763806 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.763814 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:26.763821 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:26.763829 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:26.763837 | orchestrator |
2025-07-12 13:54:26.763844 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2025-07-12 13:54:26.763852 | orchestrator | Saturday 12 July 2025 13:46:17 +0000 (0:00:03.154) 0:03:56.615 *********
2025-07-12 13:54:26.763860 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.763867 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.763875 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.763883 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:26.763890 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:26.763898 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:26.763906 | orchestrator |
2025-07-12 13:54:26.763914 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2025-07-12 13:54:26.763921 | orchestrator | Saturday 12 July 2025 13:46:17 +0000 (0:00:00.870) 0:03:57.485 *********
2025-07-12 13:54:26.763929 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.763937 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.763944 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.763952 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:26.763959 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:26.763967 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:26.763975 | orchestrator |
2025-07-12 13:54:26.763983 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2025-07-12 13:54:26.763990 | orchestrator | Saturday 12 July 2025 13:46:18 +0000 (0:00:00.647) 0:03:58.133 *********
2025-07-12 13:54:26.763998 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.764006 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.764013 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.764021 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.764028 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.764036 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.764043 | orchestrator |
2025-07-12 13:54:26.764051 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2025-07-12 13:54:26.764059 | orchestrator | Saturday 12 July 2025 13:46:19 +0000 (0:00:00.819) 0:03:58.952 *********
2025-07-12 13:54:26.764066 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.764074 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.764082 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.764093 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-07-12 13:54:26.764101 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-07-12 13:54:26.764109 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-07-12 13:54:26.764117 | orchestrator |
2025-07-12 13:54:26.764124 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2025-07-12 13:54:26.764137 | orchestrator | Saturday 12 July 2025 13:46:20 +0000 (0:00:00.625) 0:03:59.577 *********
2025-07-12 13:54:26.764145 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.764153 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.764161 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.764175 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2025-07-12 13:54:26.764185 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2025-07-12 13:54:26.764194 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.764202 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2025-07-12 13:54:26.764210 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2025-07-12 13:54:26.764218 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.764226 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2025-07-12 13:54:26.764234 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2025-07-12 13:54:26.764242 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.764249 | orchestrator |
2025-07-12 13:54:26.764257 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2025-07-12 13:54:26.764265 | orchestrator | Saturday 12 July 2025 13:46:20 +0000 (0:00:00.856) 0:04:00.434 *********
2025-07-12 13:54:26.764272 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.764280 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.764288 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.764295 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.764303 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.764310 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.764318 | orchestrator |
2025-07-12 13:54:26.764326 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2025-07-12 13:54:26.764333 | orchestrator | Saturday 12 July 2025 13:46:21 +0000 (0:00:00.652) 0:04:01.087 *********
2025-07-12 13:54:26.764341 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.764349 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.764356 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.764364 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.764371 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.764379 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.764386 | orchestrator |
2025-07-12 13:54:26.764394 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-07-12 13:54:26.764402 | orchestrator | Saturday 12 July 2025 13:46:22 +0000 (0:00:00.555) 0:04:01.730 *********
2025-07-12 13:54:26.764410 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.764417 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.764425 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.764432 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.764478 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.764488 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.764495 | orchestrator |
2025-07-12 13:54:26.764503 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-07-12 13:54:26.764511 | orchestrator | Saturday 12 July 2025 13:46:22 +0000 (0:00:00.555) 0:04:02.285 *********
2025-07-12 13:54:26.764523 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.764531 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.764538 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.764546 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.764553 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.764561 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.764569 | orchestrator |
2025-07-12 13:54:26.764577 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-07-12 13:54:26.764584 | orchestrator | Saturday 12 July 2025 13:46:23 +0000 (0:00:00.710) 0:04:02.995 *********
2025-07-12 13:54:26.764592 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.764600 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.764608 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.764621 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:26.764629 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:26.764636 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:26.764644 | orchestrator |
2025-07-12 13:54:26.764652 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-07-12 13:54:26.764659 | orchestrator | Saturday 12 July 2025 13:46:24 +0000 (0:00:00.576) 0:04:03.572 *********
2025-07-12 13:54:26.764667 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.764674 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.764682 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.764690 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:26.764698 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:26.764705 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:26.764713 | orchestrator |
2025-07-12 13:54:26.764720 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-07-12 13:54:26.764728 | orchestrator | Saturday 12 July 2025 13:46:24 +0000 (0:00:00.789) 0:04:04.361 *********
2025-07-12 13:54:26.764736 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-07-12 13:54:26.764744 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-07-12 13:54:26.764751 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-07-12 13:54:26.764759 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.764766 | orchestrator |
2025-07-12 13:54:26.764774 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-07-12 13:54:26.764782 | orchestrator | Saturday 12 July 2025 13:46:25 +0000 (0:00:00.460) 0:04:04.822 *********
2025-07-12 13:54:26.764789 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-07-12 13:54:26.764797 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-07-12 13:54:26.764804 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-07-12 13:54:26.764812 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.764820 | orchestrator |
2025-07-12 13:54:26.764827 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-07-12 13:54:26.764859 | orchestrator | Saturday 12 July 2025 13:46:25 +0000 (0:00:00.433) 0:04:05.256 *********
2025-07-12 13:54:26.764867 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-07-12 13:54:26.764875 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-07-12 13:54:26.764883 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-07-12 13:54:26.764890 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.764898 | orchestrator |
2025-07-12 13:54:26.764906 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-07-12 13:54:26.764914 | orchestrator | Saturday 12 July 2025 13:46:26 +0000 (0:00:00.353) 0:04:05.609 *********
2025-07-12 13:54:26.764927 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.764934 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.764942 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.764950 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:26.764958 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:26.764965 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:26.764973 | orchestrator |
2025-07-12 13:54:26.764980 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-07-12 13:54:26.764988 | orchestrator | Saturday 12 July 2025 13:46:26 +0000 (0:00:00.558) 0:04:06.168 *********
2025-07-12 13:54:26.764996 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-07-12 13:54:26.765003 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.765010 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-07-12 13:54:26.765016 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.765023 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-07-12 13:54:26.765029 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.765035 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-07-12 13:54:26.765042 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-07-12 13:54:26.765048 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-07-12 13:54:26.765055 | orchestrator |
2025-07-12 13:54:26.765061 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2025-07-12 13:54:26.765068 | orchestrator | Saturday 12 July 2025 13:46:28 +0000 (0:00:01.741) 0:04:07.910 *********
2025-07-12 13:54:26.765074 | orchestrator | changed:
[testbed-node-0] 2025-07-12 13:54:26.765081 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:54:26.765087 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:54:26.765094 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:54:26.765100 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:54:26.765106 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:54:26.765113 | orchestrator | 2025-07-12 13:54:26.765119 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-07-12 13:54:26.765126 | orchestrator | Saturday 12 July 2025 13:46:31 +0000 (0:00:03.024) 0:04:10.934 ********* 2025-07-12 13:54:26.765132 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:54:26.765139 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:54:26.765145 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:54:26.765152 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:54:26.765158 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:54:26.765165 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:54:26.765171 | orchestrator | 2025-07-12 13:54:26.765178 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-07-12 13:54:26.765184 | orchestrator | Saturday 12 July 2025 13:46:32 +0000 (0:00:01.107) 0:04:12.042 ********* 2025-07-12 13:54:26.765191 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.765197 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.765207 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.765214 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:54:26.765220 | orchestrator | 2025-07-12 13:54:26.765227 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-07-12 13:54:26.765233 | orchestrator | Saturday 12 July 2025 13:46:33 +0000 (0:00:01.139) 
0:04:13.182 ********* 2025-07-12 13:54:26.765240 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:26.765300 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:26.765344 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:26.765370 | orchestrator | 2025-07-12 13:54:26.765379 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-07-12 13:54:26.765390 | orchestrator | Saturday 12 July 2025 13:46:33 +0000 (0:00:00.337) 0:04:13.520 ********* 2025-07-12 13:54:26.765397 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:54:26.765403 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:54:26.765410 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:54:26.765421 | orchestrator | 2025-07-12 13:54:26.765428 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-07-12 13:54:26.765435 | orchestrator | Saturday 12 July 2025 13:46:35 +0000 (0:00:01.505) 0:04:15.025 ********* 2025-07-12 13:54:26.765441 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-07-12 13:54:26.765462 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-07-12 13:54:26.765469 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-07-12 13:54:26.765476 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.765482 | orchestrator | 2025-07-12 13:54:26.765489 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-07-12 13:54:26.765513 | orchestrator | Saturday 12 July 2025 13:46:36 +0000 (0:00:00.659) 0:04:15.685 ********* 2025-07-12 13:54:26.765520 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:26.765527 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:26.765534 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:26.765540 | orchestrator | 2025-07-12 13:54:26.765546 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] 
********************************** 2025-07-12 13:54:26.765553 | orchestrator | Saturday 12 July 2025 13:46:36 +0000 (0:00:00.354) 0:04:16.039 ********* 2025-07-12 13:54:26.765559 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.765566 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.765572 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.765579 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:26.765585 | orchestrator | 2025-07-12 13:54:26.765592 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-07-12 13:54:26.765598 | orchestrator | Saturday 12 July 2025 13:46:37 +0000 (0:00:01.041) 0:04:17.081 ********* 2025-07-12 13:54:26.765605 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 13:54:26.765611 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 13:54:26.765618 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 13:54:26.765624 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.765631 | orchestrator | 2025-07-12 13:54:26.765637 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-07-12 13:54:26.765644 | orchestrator | Saturday 12 July 2025 13:46:37 +0000 (0:00:00.428) 0:04:17.510 ********* 2025-07-12 13:54:26.765650 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.765657 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.765672 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.765679 | orchestrator | 2025-07-12 13:54:26.765686 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-07-12 13:54:26.765692 | orchestrator | Saturday 12 July 2025 13:46:38 +0000 (0:00:00.329) 0:04:17.839 ********* 2025-07-12 13:54:26.765699 | orchestrator | 
skipping: [testbed-node-3] 2025-07-12 13:54:26.765705 | orchestrator | 2025-07-12 13:54:26.765712 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-07-12 13:54:26.765718 | orchestrator | Saturday 12 July 2025 13:46:38 +0000 (0:00:00.237) 0:04:18.077 ********* 2025-07-12 13:54:26.765735 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.765750 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.765757 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.765763 | orchestrator | 2025-07-12 13:54:26.765770 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-07-12 13:54:26.765776 | orchestrator | Saturday 12 July 2025 13:46:38 +0000 (0:00:00.325) 0:04:18.402 ********* 2025-07-12 13:54:26.765783 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.765789 | orchestrator | 2025-07-12 13:54:26.765795 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-07-12 13:54:26.765802 | orchestrator | Saturday 12 July 2025 13:46:39 +0000 (0:00:00.232) 0:04:18.635 ********* 2025-07-12 13:54:26.765808 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.765820 | orchestrator | 2025-07-12 13:54:26.765827 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-07-12 13:54:26.765833 | orchestrator | Saturday 12 July 2025 13:46:39 +0000 (0:00:00.214) 0:04:18.849 ********* 2025-07-12 13:54:26.765840 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.765846 | orchestrator | 2025-07-12 13:54:26.765853 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-07-12 13:54:26.765859 | orchestrator | Saturday 12 July 2025 13:46:39 +0000 (0:00:00.348) 0:04:19.197 ********* 2025-07-12 13:54:26.765865 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.765872 | orchestrator | 
2025-07-12 13:54:26.765878 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-07-12 13:54:26.765885 | orchestrator | Saturday 12 July 2025 13:46:39 +0000 (0:00:00.245) 0:04:19.443 ********* 2025-07-12 13:54:26.765891 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.765908 | orchestrator | 2025-07-12 13:54:26.765915 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-07-12 13:54:26.765922 | orchestrator | Saturday 12 July 2025 13:46:40 +0000 (0:00:00.233) 0:04:19.676 ********* 2025-07-12 13:54:26.765932 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 13:54:26.765939 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 13:54:26.765945 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 13:54:26.765952 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.765958 | orchestrator | 2025-07-12 13:54:26.765965 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-07-12 13:54:26.765971 | orchestrator | Saturday 12 July 2025 13:46:40 +0000 (0:00:00.423) 0:04:20.100 ********* 2025-07-12 13:54:26.765978 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.765984 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.765991 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.765997 | orchestrator | 2025-07-12 13:54:26.766009 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-07-12 13:54:26.766090 | orchestrator | Saturday 12 July 2025 13:46:40 +0000 (0:00:00.322) 0:04:20.423 ********* 2025-07-12 13:54:26.766100 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.766106 | orchestrator | 2025-07-12 13:54:26.766113 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-07-12 
13:54:26.766119 | orchestrator | Saturday 12 July 2025 13:46:41 +0000 (0:00:00.221) 0:04:20.644 ********* 2025-07-12 13:54:26.766126 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.766132 | orchestrator | 2025-07-12 13:54:26.766139 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-07-12 13:54:26.766145 | orchestrator | Saturday 12 July 2025 13:46:41 +0000 (0:00:00.235) 0:04:20.879 ********* 2025-07-12 13:54:26.766152 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.766158 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.766165 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.766171 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:26.766178 | orchestrator | 2025-07-12 13:54:26.766184 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-07-12 13:54:26.766191 | orchestrator | Saturday 12 July 2025 13:46:42 +0000 (0:00:01.098) 0:04:21.978 ********* 2025-07-12 13:54:26.766197 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.766204 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.766210 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.766217 | orchestrator | 2025-07-12 13:54:26.766223 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-07-12 13:54:26.766230 | orchestrator | Saturday 12 July 2025 13:46:42 +0000 (0:00:00.353) 0:04:22.331 ********* 2025-07-12 13:54:26.766236 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:54:26.766242 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:54:26.766249 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:54:26.766264 | orchestrator | 2025-07-12 13:54:26.766271 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-07-12 
13:54:26.766278 | orchestrator | Saturday 12 July 2025 13:46:44 +0000 (0:00:01.253) 0:04:23.585 ********* 2025-07-12 13:54:26.766284 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 13:54:26.766291 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 13:54:26.766297 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 13:54:26.766304 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.766310 | orchestrator | 2025-07-12 13:54:26.766317 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-07-12 13:54:26.766323 | orchestrator | Saturday 12 July 2025 13:46:45 +0000 (0:00:01.087) 0:04:24.673 ********* 2025-07-12 13:54:26.766330 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.766336 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.766343 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.766349 | orchestrator | 2025-07-12 13:54:26.766356 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-07-12 13:54:26.766362 | orchestrator | Saturday 12 July 2025 13:46:45 +0000 (0:00:00.358) 0:04:25.031 ********* 2025-07-12 13:54:26.766369 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.766375 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.766382 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.766388 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:26.766395 | orchestrator | 2025-07-12 13:54:26.766401 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-07-12 13:54:26.766408 | orchestrator | Saturday 12 July 2025 13:46:46 +0000 (0:00:01.051) 0:04:26.082 ********* 2025-07-12 13:54:26.766414 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.766421 | orchestrator | 
ok: [testbed-node-4] 2025-07-12 13:54:26.766427 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.766434 | orchestrator | 2025-07-12 13:54:26.766440 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-07-12 13:54:26.766460 | orchestrator | Saturday 12 July 2025 13:46:46 +0000 (0:00:00.343) 0:04:26.425 ********* 2025-07-12 13:54:26.766466 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:54:26.766473 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:54:26.766479 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:54:26.766486 | orchestrator | 2025-07-12 13:54:26.766492 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-07-12 13:54:26.766499 | orchestrator | Saturday 12 July 2025 13:46:48 +0000 (0:00:01.339) 0:04:27.765 ********* 2025-07-12 13:54:26.766506 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 13:54:26.766512 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 13:54:26.766519 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 13:54:26.766525 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.766532 | orchestrator | 2025-07-12 13:54:26.766538 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-07-12 13:54:26.766545 | orchestrator | Saturday 12 July 2025 13:46:49 +0000 (0:00:00.854) 0:04:28.620 ********* 2025-07-12 13:54:26.766552 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.766558 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.766565 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.766571 | orchestrator | 2025-07-12 13:54:26.766582 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-07-12 13:54:26.766589 | orchestrator | Saturday 12 July 2025 13:46:49 +0000 (0:00:00.344) 0:04:28.964 ********* 
2025-07-12 13:54:26.766595 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.766602 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.766608 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.766615 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.766626 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.766632 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.766639 | orchestrator | 2025-07-12 13:54:26.766645 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-07-12 13:54:26.766652 | orchestrator | Saturday 12 July 2025 13:46:50 +0000 (0:00:00.825) 0:04:29.790 ********* 2025-07-12 13:54:26.766683 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.766691 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.766698 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.766704 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:54:26.766711 | orchestrator | 2025-07-12 13:54:26.766717 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-07-12 13:54:26.766724 | orchestrator | Saturday 12 July 2025 13:46:51 +0000 (0:00:01.041) 0:04:30.831 ********* 2025-07-12 13:54:26.766730 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:26.766737 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:26.766743 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:26.766750 | orchestrator | 2025-07-12 13:54:26.766756 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-07-12 13:54:26.766763 | orchestrator | Saturday 12 July 2025 13:46:51 +0000 (0:00:00.349) 0:04:31.181 ********* 2025-07-12 13:54:26.766769 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:54:26.766776 | orchestrator | changed: [testbed-node-1] 2025-07-12 
13:54:26.766782 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:54:26.766789 | orchestrator | 2025-07-12 13:54:26.766795 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-07-12 13:54:26.766802 | orchestrator | Saturday 12 July 2025 13:46:52 +0000 (0:00:01.178) 0:04:32.359 ********* 2025-07-12 13:54:26.766808 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-07-12 13:54:26.766814 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-07-12 13:54:26.766821 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-07-12 13:54:26.766827 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.766834 | orchestrator | 2025-07-12 13:54:26.766840 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-07-12 13:54:26.766847 | orchestrator | Saturday 12 July 2025 13:46:53 +0000 (0:00:00.855) 0:04:33.215 ********* 2025-07-12 13:54:26.766853 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:26.766860 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:26.766866 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:26.766873 | orchestrator | 2025-07-12 13:54:26.766879 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-07-12 13:54:26.766886 | orchestrator | 2025-07-12 13:54:26.766892 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-07-12 13:54:26.766899 | orchestrator | Saturday 12 July 2025 13:46:54 +0000 (0:00:00.829) 0:04:34.044 ********* 2025-07-12 13:54:26.766906 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:54:26.766912 | orchestrator | 2025-07-12 13:54:26.766919 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-07-12 
13:54:26.766925 | orchestrator | Saturday 12 July 2025 13:46:55 +0000 (0:00:00.572) 0:04:34.617 ********* 2025-07-12 13:54:26.766932 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:54:26.766938 | orchestrator | 2025-07-12 13:54:26.766945 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-07-12 13:54:26.766951 | orchestrator | Saturday 12 July 2025 13:46:55 +0000 (0:00:00.810) 0:04:35.427 ********* 2025-07-12 13:54:26.766957 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:26.766964 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:26.766970 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:26.766977 | orchestrator | 2025-07-12 13:54:26.766983 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-07-12 13:54:26.766995 | orchestrator | Saturday 12 July 2025 13:46:56 +0000 (0:00:00.844) 0:04:36.273 ********* 2025-07-12 13:54:26.767001 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.767008 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.767014 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.767020 | orchestrator | 2025-07-12 13:54:26.767027 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-07-12 13:54:26.767033 | orchestrator | Saturday 12 July 2025 13:46:57 +0000 (0:00:00.381) 0:04:36.654 ********* 2025-07-12 13:54:26.767040 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.767046 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.767053 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.767059 | orchestrator | 2025-07-12 13:54:26.767066 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-07-12 13:54:26.767072 | orchestrator | Saturday 12 July 2025 13:46:57 
+0000 (0:00:00.330) 0:04:36.984 ********* 2025-07-12 13:54:26.767079 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.767085 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.767092 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.767098 | orchestrator | 2025-07-12 13:54:26.767104 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-07-12 13:54:26.767111 | orchestrator | Saturday 12 July 2025 13:46:57 +0000 (0:00:00.536) 0:04:37.520 ********* 2025-07-12 13:54:26.767117 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:26.767124 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:26.767130 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:26.767137 | orchestrator | 2025-07-12 13:54:26.767143 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-07-12 13:54:26.767153 | orchestrator | Saturday 12 July 2025 13:46:58 +0000 (0:00:00.748) 0:04:38.269 ********* 2025-07-12 13:54:26.767160 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.767166 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.767172 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.767179 | orchestrator | 2025-07-12 13:54:26.767185 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-07-12 13:54:26.767192 | orchestrator | Saturday 12 July 2025 13:46:59 +0000 (0:00:00.294) 0:04:38.563 ********* 2025-07-12 13:54:26.767198 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.767205 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.767211 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.767218 | orchestrator | 2025-07-12 13:54:26.767224 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-07-12 13:54:26.767251 | orchestrator | Saturday 12 July 2025 13:46:59 +0000 (0:00:00.313) 
0:04:38.877 ********* 2025-07-12 13:54:26.767258 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:26.767265 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:26.767272 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:26.767278 | orchestrator | 2025-07-12 13:54:26.767285 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-12 13:54:26.767291 | orchestrator | Saturday 12 July 2025 13:47:00 +0000 (0:00:00.974) 0:04:39.851 ********* 2025-07-12 13:54:26.767298 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:26.767305 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:26.767311 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:26.767317 | orchestrator | 2025-07-12 13:54:26.767324 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-12 13:54:26.767330 | orchestrator | Saturday 12 July 2025 13:47:01 +0000 (0:00:00.741) 0:04:40.593 ********* 2025-07-12 13:54:26.767337 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.767343 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.767350 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.767356 | orchestrator | 2025-07-12 13:54:26.767363 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-07-12 13:54:26.767374 | orchestrator | Saturday 12 July 2025 13:47:01 +0000 (0:00:00.334) 0:04:40.927 ********* 2025-07-12 13:54:26.767381 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:26.767388 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:26.767394 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:26.767400 | orchestrator | 2025-07-12 13:54:26.767407 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-12 13:54:26.767413 | orchestrator | Saturday 12 July 2025 13:47:01 +0000 (0:00:00.316) 0:04:41.243 ********* 2025-07-12 13:54:26.767420 | 
orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.767427 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.767433 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.767439 | orchestrator |
2025-07-12 13:54:26.767484 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-07-12 13:54:26.767492 | orchestrator | Saturday 12 July 2025 13:47:02 +0000 (0:00:00.576) 0:04:41.820 *********
2025-07-12 13:54:26.767499 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.767506 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.767512 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.767519 | orchestrator |
2025-07-12 13:54:26.767553 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-07-12 13:54:26.767560 | orchestrator | Saturday 12 July 2025 13:47:02 +0000 (0:00:00.335) 0:04:42.155 *********
2025-07-12 13:54:26.767567 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.767574 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.767580 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.767586 | orchestrator |
2025-07-12 13:54:26.767593 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-07-12 13:54:26.767600 | orchestrator | Saturday 12 July 2025 13:47:02 +0000 (0:00:00.322) 0:04:42.477 *********
2025-07-12 13:54:26.767606 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.767613 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.767619 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.767626 | orchestrator |
2025-07-12 13:54:26.767632 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-07-12 13:54:26.767639 | orchestrator | Saturday 12 July 2025 13:47:03 +0000 (0:00:00.291) 0:04:42.769 *********
2025-07-12 13:54:26.767645 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.767651 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.767657 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.767663 | orchestrator |
2025-07-12 13:54:26.767669 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-07-12 13:54:26.767676 | orchestrator | Saturday 12 July 2025 13:47:03 +0000 (0:00:00.589) 0:04:43.359 *********
2025-07-12 13:54:26.767682 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.767688 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:26.767693 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:26.767699 | orchestrator |
2025-07-12 13:54:26.767705 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-07-12 13:54:26.767711 | orchestrator | Saturday 12 July 2025 13:47:04 +0000 (0:00:00.334) 0:04:43.693 *********
2025-07-12 13:54:26.767717 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.767723 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:26.767729 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:26.767735 | orchestrator |
2025-07-12 13:54:26.767741 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-07-12 13:54:26.767747 | orchestrator | Saturday 12 July 2025 13:47:04 +0000 (0:00:00.336) 0:04:44.029 *********
2025-07-12 13:54:26.767753 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.767759 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:26.767765 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:26.767771 | orchestrator |
2025-07-12 13:54:26.767777 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2025-07-12 13:54:26.767783 | orchestrator | Saturday 12 July 2025 13:47:05 +0000 (0:00:00.809) 0:04:44.839 *********
2025-07-12 13:54:26.767794 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.767800 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:26.767817 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:26.767823 | orchestrator |
2025-07-12 13:54:26.767830 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2025-07-12 13:54:26.767854 | orchestrator | Saturday 12 July 2025 13:47:05 +0000 (0:00:00.375) 0:04:45.214 *********
2025-07-12 13:54:26.767865 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 13:54:26.767872 | orchestrator |
2025-07-12 13:54:26.767878 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2025-07-12 13:54:26.767884 | orchestrator | Saturday 12 July 2025 13:47:06 +0000 (0:00:00.573) 0:04:45.788 *********
2025-07-12 13:54:26.767890 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.767896 | orchestrator |
2025-07-12 13:54:26.767902 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2025-07-12 13:54:26.767908 | orchestrator | Saturday 12 July 2025 13:47:06 +0000 (0:00:00.152) 0:04:45.940 *********
2025-07-12 13:54:26.767914 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-07-12 13:54:26.767920 | orchestrator |
2025-07-12 13:54:26.767949 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2025-07-12 13:54:26.767956 | orchestrator | Saturday 12 July 2025 13:47:08 +0000 (0:00:01.588) 0:04:47.529 *********
2025-07-12 13:54:26.767962 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.767968 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:26.767974 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:26.767980 | orchestrator |
2025-07-12 13:54:26.767986 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2025-07-12 13:54:26.767992 | orchestrator | Saturday 12 July 2025 13:47:08 +0000 (0:00:00.327) 0:04:47.856 *********
2025-07-12 13:54:26.767998 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.768004 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:26.768010 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:26.768016 | orchestrator |
2025-07-12 13:54:26.768022 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2025-07-12 13:54:26.768028 | orchestrator | Saturday 12 July 2025 13:47:08 +0000 (0:00:00.346) 0:04:48.203 *********
2025-07-12 13:54:26.768034 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:54:26.768040 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:54:26.768046 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:54:26.768051 | orchestrator |
2025-07-12 13:54:26.768057 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2025-07-12 13:54:26.768063 | orchestrator | Saturday 12 July 2025 13:47:09 +0000 (0:00:01.221) 0:04:49.425 *********
2025-07-12 13:54:26.768070 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:54:26.768076 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:54:26.768082 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:54:26.768087 | orchestrator |
2025-07-12 13:54:26.768093 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2025-07-12 13:54:26.768099 | orchestrator | Saturday 12 July 2025 13:47:10 +0000 (0:00:01.054) 0:04:50.479 *********
2025-07-12 13:54:26.768105 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:54:26.768111 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:54:26.768117 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:54:26.768123 | orchestrator |
2025-07-12 13:54:26.768130 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2025-07-12 13:54:26.768136 | orchestrator | Saturday 12 July 2025 13:47:11 +0000 (0:00:00.691) 0:04:51.171 *********
2025-07-12 13:54:26.768142 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.768148 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:26.768154 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:26.768160 | orchestrator |
2025-07-12 13:54:26.768166 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2025-07-12 13:54:26.768172 | orchestrator | Saturday 12 July 2025 13:47:12 +0000 (0:00:00.657) 0:04:51.828 *********
2025-07-12 13:54:26.768183 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:54:26.768190 | orchestrator |
2025-07-12 13:54:26.768196 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2025-07-12 13:54:26.768202 | orchestrator | Saturday 12 July 2025 13:47:13 +0000 (0:00:01.311) 0:04:53.139 *********
2025-07-12 13:54:26.768208 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.768214 | orchestrator |
2025-07-12 13:54:26.768219 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2025-07-12 13:54:26.768226 | orchestrator | Saturday 12 July 2025 13:47:14 +0000 (0:00:00.717) 0:04:53.856 *********
2025-07-12 13:54:26.768231 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-12 13:54:26.768237 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 13:54:26.768243 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 13:54:26.768250 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-07-12 13:54:26.768256 | orchestrator | ok: [testbed-node-1] => (item=None)
2025-07-12 13:54:26.768262 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-07-12 13:54:26.768268 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-07-12 13:54:26.768274 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2025-07-12 13:54:26.768280 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-07-12 13:54:26.768286 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2025-07-12 13:54:26.768292 | orchestrator | ok: [testbed-node-2] => (item=None)
2025-07-12 13:54:26.768298 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2025-07-12 13:54:26.768303 | orchestrator |
2025-07-12 13:54:26.768310 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2025-07-12 13:54:26.768315 | orchestrator | Saturday 12 July 2025 13:47:17 +0000 (0:00:03.497) 0:04:57.354 *********
2025-07-12 13:54:26.768321 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:54:26.768327 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:54:26.768333 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:54:26.768339 | orchestrator |
2025-07-12 13:54:26.768345 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2025-07-12 13:54:26.768351 | orchestrator | Saturday 12 July 2025 13:47:19 +0000 (0:00:01.461) 0:04:58.816 *********
2025-07-12 13:54:26.768357 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.768363 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:26.768369 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:26.768375 | orchestrator |
2025-07-12 13:54:26.768381 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2025-07-12 13:54:26.768391 | orchestrator | Saturday 12 July 2025 13:47:19 +0000 (0:00:00.333) 0:04:59.150 *********
2025-07-12 13:54:26.768397 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.768403 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:26.768409 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:26.768415 | orchestrator |
2025-07-12 13:54:26.768421 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2025-07-12 13:54:26.768427 | orchestrator | Saturday 12 July 2025 13:47:19 +0000 (0:00:00.370) 0:04:59.520 *********
2025-07-12 13:54:26.768433 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:54:26.768439 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:54:26.768458 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:54:26.768464 | orchestrator |
2025-07-12 13:54:26.768471 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2025-07-12 13:54:26.768495 | orchestrator | Saturday 12 July 2025 13:47:22 +0000 (0:00:02.509) 0:05:02.029 *********
2025-07-12 13:54:26.768502 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:54:26.768508 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:54:26.768514 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:54:26.768520 | orchestrator |
2025-07-12 13:54:26.768526 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2025-07-12 13:54:26.768538 | orchestrator | Saturday 12 July 2025 13:47:24 +0000 (0:00:01.684) 0:05:03.714 *********
2025-07-12 13:54:26.768544 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.768550 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.768556 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.768562 | orchestrator |
2025-07-12 13:54:26.768568 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2025-07-12 13:54:26.768574 | orchestrator | Saturday 12 July 2025 13:47:24 +0000 (0:00:00.330) 0:05:04.044 *********
2025-07-12 13:54:26.768580 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 13:54:26.768586 | orchestrator |
2025-07-12 13:54:26.768592 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2025-07-12 13:54:26.768598 | orchestrator | Saturday 12 July 2025 13:47:25 +0000 (0:00:00.609) 0:05:04.653 *********
2025-07-12 13:54:26.768604 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.768610 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.768616 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.768622 | orchestrator |
2025-07-12 13:54:26.768628 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2025-07-12 13:54:26.768634 | orchestrator | Saturday 12 July 2025 13:47:25 +0000 (0:00:00.572) 0:05:05.226 *********
2025-07-12 13:54:26.768640 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.768646 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.768652 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.768658 | orchestrator |
2025-07-12 13:54:26.768664 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2025-07-12 13:54:26.768670 | orchestrator | Saturday 12 July 2025 13:47:26 +0000 (0:00:00.359) 0:05:05.585 *********
2025-07-12 13:54:26.768676 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 13:54:26.768682 | orchestrator |
2025-07-12 13:54:26.768688 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2025-07-12 13:54:26.768694 | orchestrator | Saturday 12 July 2025 13:47:26 +0000 (0:00:00.540) 0:05:06.126 *********
2025-07-12 13:54:26.768700 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:54:26.768706 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:54:26.768712 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:54:26.768718 | orchestrator |
2025-07-12 13:54:26.768724 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2025-07-12 13:54:26.768730 | orchestrator | Saturday 12 July 2025 13:47:28 +0000 (0:00:01.967) 0:05:08.094 *********
2025-07-12 13:54:26.768736 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:54:26.768742 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:54:26.768748 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:54:26.768754 | orchestrator |
2025-07-12 13:54:26.768760 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2025-07-12 13:54:26.768766 | orchestrator | Saturday 12 July 2025 13:47:29 +0000 (0:00:01.304) 0:05:09.398 *********
2025-07-12 13:54:26.768772 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:54:26.768778 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:54:26.768784 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:54:26.768790 | orchestrator |
2025-07-12 13:54:26.768796 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2025-07-12 13:54:26.768802 | orchestrator | Saturday 12 July 2025 13:47:31 +0000 (0:00:01.680) 0:05:11.078 *********
2025-07-12 13:54:26.768808 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:54:26.768814 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:54:26.768820 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:54:26.768826 | orchestrator |
2025-07-12 13:54:26.768832 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2025-07-12 13:54:26.768838 | orchestrator | Saturday 12 July 2025 13:47:34 +0000 (0:00:02.756) 0:05:13.834 *********
2025-07-12 13:54:26.768844 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 13:54:26.768854 | orchestrator |
2025-07-12 13:54:26.768860 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2025-07-12 13:54:26.768866 | orchestrator | Saturday 12 July 2025 13:47:35 +0000 (0:00:00.828) 0:05:14.663 *********
2025-07-12 13:54:26.768872 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2025-07-12 13:54:26.768878 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.768884 | orchestrator |
2025-07-12 13:54:26.768890 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2025-07-12 13:54:26.768896 | orchestrator | Saturday 12 July 2025 13:47:56 +0000 (0:00:21.859) 0:05:36.522 *********
2025-07-12 13:54:26.768902 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.768908 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:26.768914 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:26.768920 | orchestrator |
2025-07-12 13:54:26.768926 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2025-07-12 13:54:26.768936 | orchestrator | Saturday 12 July 2025 13:48:06 +0000 (0:00:09.050) 0:05:45.573 *********
2025-07-12 13:54:26.768942 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.768948 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.768954 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.768960 | orchestrator |
2025-07-12 13:54:26.768976 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2025-07-12 13:54:26.768983 | orchestrator | Saturday 12 July 2025 13:48:06 +0000 (0:00:00.326) 0:05:45.899 *********
2025-07-12 13:54:26.769009 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5e585953c616f5409e6ad0826a34864df8cc128c'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2025-07-12 13:54:26.769018 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5e585953c616f5409e6ad0826a34864df8cc128c'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2025-07-12 13:54:26.769026 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5e585953c616f5409e6ad0826a34864df8cc128c'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2025-07-12 13:54:26.769033 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5e585953c616f5409e6ad0826a34864df8cc128c'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2025-07-12 13:54:26.769040 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5e585953c616f5409e6ad0826a34864df8cc128c'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2025-07-12 13:54:26.769047 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5e585953c616f5409e6ad0826a34864df8cc128c'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__5e585953c616f5409e6ad0826a34864df8cc128c'}])
2025-07-12 13:54:26.769059 | orchestrator |
2025-07-12 13:54:26.769065 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-07-12 13:54:26.769071 | orchestrator | Saturday 12 July 2025 13:48:21 +0000 (0:00:15.002) 0:06:00.901 *********
2025-07-12 13:54:26.769077 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.769083 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.769089 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.769095 | orchestrator |
2025-07-12 13:54:26.769101 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-07-12 13:54:26.769107 | orchestrator | Saturday 12 July 2025 13:48:21 +0000 (0:00:00.359) 0:06:01.261 *********
2025-07-12 13:54:26.769113 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 13:54:26.769119 | orchestrator |
2025-07-12 13:54:26.769125 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-07-12 13:54:26.769131 | orchestrator | Saturday 12 July 2025 13:48:22 +0000 (0:00:00.853) 0:06:02.114 *********
2025-07-12 13:54:26.769138 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.769147 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:26.769157 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:26.769168 | orchestrator |
2025-07-12 13:54:26.769178 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-07-12 13:54:26.769188 | orchestrator | Saturday 12 July 2025 13:48:22 +0000 (0:00:00.356) 0:06:02.471 *********
2025-07-12 13:54:26.769195 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.769201 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.769207 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.769213 | orchestrator |
2025-07-12 13:54:26.769219 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-07-12 13:54:26.769225 | orchestrator | Saturday 12 July 2025 13:48:23 +0000 (0:00:00.339) 0:06:02.810 *********
2025-07-12 13:54:26.769231 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 13:54:26.769237 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-07-12 13:54:26.769247 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-07-12 13:54:26.769253 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.769259 | orchestrator |
2025-07-12 13:54:26.769265 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-07-12 13:54:26.769271 | orchestrator | Saturday 12 July 2025 13:48:24 +0000 (0:00:00.872) 0:06:03.683 *********
2025-07-12 13:54:26.769277 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.769283 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:26.769289 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:26.769295 | orchestrator |
2025-07-12 13:54:26.769301 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2025-07-12 13:54:26.769307 | orchestrator |
2025-07-12 13:54:26.769313 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-07-12 13:54:26.769342 | orchestrator | Saturday 12 July 2025 13:48:24 +0000 (0:00:00.802) 0:06:04.485 *********
2025-07-12 13:54:26.769349 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 13:54:26.769355 | orchestrator |
2025-07-12 13:54:26.769361 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-07-12 13:54:26.769367 | orchestrator | Saturday 12 July 2025 13:48:25 +0000 (0:00:00.623) 0:06:05.109 *********
2025-07-12 13:54:26.769373 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 13:54:26.769379 | orchestrator |
2025-07-12 13:54:26.769385 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-07-12 13:54:26.769396 | orchestrator | Saturday 12 July 2025 13:48:26 +0000 (0:00:00.753) 0:06:05.863 *********
2025-07-12 13:54:26.769418 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.769425 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:26.769431 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:26.769437 | orchestrator |
2025-07-12 13:54:26.769483 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-07-12 13:54:26.769496 | orchestrator | Saturday 12 July 2025 13:48:27 +0000 (0:00:00.684) 0:06:06.548 *********
2025-07-12 13:54:26.769506 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.769516 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.769523 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.769529 | orchestrator |
2025-07-12 13:54:26.769535 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-07-12 13:54:26.769541 | orchestrator | Saturday 12 July 2025 13:48:27 +0000 (0:00:00.315) 0:06:06.863 *********
2025-07-12 13:54:26.769547 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.769553 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.769559 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.769565 | orchestrator |
2025-07-12 13:54:26.769571 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-07-12 13:54:26.769577 | orchestrator | Saturday 12 July 2025 13:48:27 +0000 (0:00:00.540) 0:06:07.403 *********
2025-07-12 13:54:26.769583 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.769589 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.769595 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.769601 | orchestrator |
2025-07-12 13:54:26.769607 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-07-12 13:54:26.769614 | orchestrator | Saturday 12 July 2025 13:48:28 +0000 (0:00:00.314) 0:06:07.718 *********
2025-07-12 13:54:26.769620 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.769626 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:26.769632 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:26.769638 | orchestrator |
2025-07-12 13:54:26.769644 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-07-12 13:54:26.769650 | orchestrator | Saturday 12 July 2025 13:48:28 +0000 (0:00:00.694) 0:06:08.412 *********
2025-07-12 13:54:26.769656 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.769662 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.769668 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.769673 | orchestrator |
2025-07-12 13:54:26.769678 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-07-12 13:54:26.769684 | orchestrator | Saturday 12 July 2025 13:48:29 +0000 (0:00:00.356) 0:06:08.769 *********
2025-07-12 13:54:26.769689 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.769694 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.769699 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.769705 | orchestrator |
2025-07-12 13:54:26.769710 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-07-12 13:54:26.769715 | orchestrator | Saturday 12 July 2025 13:48:29 +0000 (0:00:00.579) 0:06:09.349 *********
2025-07-12 13:54:26.769721 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.769726 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:26.769731 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:26.769736 | orchestrator |
2025-07-12 13:54:26.769742 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-07-12 13:54:26.769747 | orchestrator | Saturday 12 July 2025 13:48:30 +0000 (0:00:00.719) 0:06:10.069 *********
2025-07-12 13:54:26.769752 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.769758 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:26.769763 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:26.769768 | orchestrator |
2025-07-12 13:54:26.769773 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-07-12 13:54:26.769779 | orchestrator | Saturday 12 July 2025 13:48:31 +0000 (0:00:00.687) 0:06:10.757 *********
2025-07-12 13:54:26.769784 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.769794 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.769799 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.769805 | orchestrator |
2025-07-12 13:54:26.769810 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-07-12 13:54:26.769815 | orchestrator | Saturday 12 July 2025 13:48:31 +0000 (0:00:00.301) 0:06:11.059 *********
2025-07-12 13:54:26.769821 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.769826 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:26.769831 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:26.769836 | orchestrator |
2025-07-12 13:54:26.769842 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-07-12 13:54:26.769850 | orchestrator | Saturday 12 July 2025 13:48:32 +0000 (0:00:00.564) 0:06:11.623 *********
2025-07-12 13:54:26.769856 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.769861 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.769867 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.769872 | orchestrator |
2025-07-12 13:54:26.769877 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-07-12 13:54:26.769883 | orchestrator | Saturday 12 July 2025 13:48:32 +0000 (0:00:00.293) 0:06:11.917 *********
2025-07-12 13:54:26.769888 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.769893 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.769898 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.769904 | orchestrator |
2025-07-12 13:54:26.769909 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-07-12 13:54:26.769935 | orchestrator | Saturday 12 July 2025 13:48:32 +0000 (0:00:00.357) 0:06:12.275 *********
2025-07-12 13:54:26.769941 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.769947 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.769952 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.769957 | orchestrator |
2025-07-12 13:54:26.769963 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-07-12 13:54:26.769968 | orchestrator | Saturday 12 July 2025 13:48:33 +0000 (0:00:00.339) 0:06:12.614 *********
2025-07-12 13:54:26.769973 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.769979 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.769984 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.769989 | orchestrator |
2025-07-12 13:54:26.769995 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-07-12 13:54:26.770000 | orchestrator | Saturday 12 July 2025 13:48:33 +0000 (0:00:00.660) 0:06:13.275 *********
2025-07-12 13:54:26.770005 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.770010 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.770039 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.770044 | orchestrator |
2025-07-12 13:54:26.770050 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-07-12 13:54:26.770055 | orchestrator | Saturday 12 July 2025 13:48:34 +0000 (0:00:00.330) 0:06:13.606 *********
2025-07-12 13:54:26.770061 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.770066 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:26.770071 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:26.770076 | orchestrator |
2025-07-12 13:54:26.770082 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-07-12 13:54:26.770087 | orchestrator | Saturday 12 July 2025 13:48:34 +0000 (0:00:00.349) 0:06:13.956 *********
2025-07-12 13:54:26.770092 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.770098 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:26.770103 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:26.770108 | orchestrator |
2025-07-12 13:54:26.770113 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-07-12 13:54:26.770119 | orchestrator | Saturday 12 July 2025 13:48:34 +0000 (0:00:00.362) 0:06:14.318 *********
2025-07-12 13:54:26.770124 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.770129 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:26.770135 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:26.770144 | orchestrator |
2025-07-12 13:54:26.770150 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2025-07-12 13:54:26.770155 | orchestrator | Saturday 12 July 2025 13:48:35 +0000 (0:00:00.822) 0:06:15.141 *********
2025-07-12 13:54:26.770160 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 13:54:26.770166 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-12 13:54:26.770171 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-12 13:54:26.770176 | orchestrator |
2025-07-12 13:54:26.770182 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2025-07-12 13:54:26.770187 | orchestrator | Saturday 12 July 2025 13:48:36 +0000 (0:00:00.620) 0:06:15.762 *********
2025-07-12 13:54:26.770192 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 13:54:26.770198 | orchestrator |
2025-07-12 13:54:26.770203 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2025-07-12 13:54:26.770208 | orchestrator | Saturday 12 July 2025 13:48:36 +0000 (0:00:00.534) 0:06:16.297 *********
2025-07-12 13:54:26.770214 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:54:26.770219 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:54:26.770224 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:54:26.770230 | orchestrator |
2025-07-12 13:54:26.770235 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2025-07-12 13:54:26.770240 | orchestrator | Saturday 12 July 2025 13:48:37 +0000 (0:00:00.956) 0:06:17.254 *********
2025-07-12 13:54:26.770245 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:26.770251 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:26.770256 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:26.770261 | orchestrator |
2025-07-12 13:54:26.770267 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2025-07-12 13:54:26.770272 | orchestrator | Saturday 12 July 2025 13:48:38 +0000 (0:00:00.385) 0:06:17.639 *********
2025-07-12 13:54:26.770277 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-12 13:54:26.770283 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-12 13:54:26.770288 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-12 13:54:26.770293 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2025-07-12 13:54:26.770299 | orchestrator |
2025-07-12 13:54:26.770304 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2025-07-12 13:54:26.770309 | orchestrator | Saturday 12 July 2025 13:48:48 +0000 (0:00:10.334) 0:06:27.973 *********
2025-07-12 13:54:26.770315 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:26.770320 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:26.770325 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:26.770330 | orchestrator |
2025-07-12 13:54:26.770336 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2025-07-12 13:54:26.770341 | orchestrator | Saturday 12 July 2025 13:48:48 +0000 (0:00:00.329) 0:06:28.303 *********
2025-07-12 13:54:26.770349 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-07-12 13:54:26.770355 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-07-12 13:54:26.770360 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-07-12 13:54:26.770366 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-07-12 13:54:26.770371 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 13:54:26.770376 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 13:54:26.770382 | orchestrator |
2025-07-12 13:54:26.770387 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2025-07-12 13:54:26.770393 | orchestrator | Saturday 12 July 2025 13:48:51 +0000 (0:00:02.402)
0:06:30.705 ********* 2025-07-12 13:54:26.770415 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-07-12 13:54:26.770422 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-07-12 13:54:26.770431 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-07-12 13:54:26.770436 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-07-12 13:54:26.770441 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-07-12 13:54:26.770462 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-07-12 13:54:26.770468 | orchestrator | 2025-07-12 13:54:26.770473 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-07-12 13:54:26.770479 | orchestrator | Saturday 12 July 2025 13:48:52 +0000 (0:00:01.564) 0:06:32.270 ********* 2025-07-12 13:54:26.770484 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:26.770489 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:26.770495 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:26.770500 | orchestrator | 2025-07-12 13:54:26.770505 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-07-12 13:54:26.770511 | orchestrator | Saturday 12 July 2025 13:48:53 +0000 (0:00:00.724) 0:06:32.994 ********* 2025-07-12 13:54:26.770516 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.770521 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.770526 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.770531 | orchestrator | 2025-07-12 13:54:26.770537 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-07-12 13:54:26.770542 | orchestrator | Saturday 12 July 2025 13:48:53 +0000 (0:00:00.326) 0:06:33.320 ********* 2025-07-12 13:54:26.770547 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.770553 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.770558 | orchestrator | skipping: 
[testbed-node-2] 2025-07-12 13:54:26.770563 | orchestrator | 2025-07-12 13:54:26.770568 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-07-12 13:54:26.770574 | orchestrator | Saturday 12 July 2025 13:48:54 +0000 (0:00:00.320) 0:06:33.640 ********* 2025-07-12 13:54:26.770579 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:54:26.770584 | orchestrator | 2025-07-12 13:54:26.770590 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-07-12 13:54:26.770595 | orchestrator | Saturday 12 July 2025 13:48:54 +0000 (0:00:00.795) 0:06:34.436 ********* 2025-07-12 13:54:26.770600 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.770605 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.770611 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.770616 | orchestrator | 2025-07-12 13:54:26.770621 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-07-12 13:54:26.770626 | orchestrator | Saturday 12 July 2025 13:48:55 +0000 (0:00:00.414) 0:06:34.850 ********* 2025-07-12 13:54:26.770632 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.770637 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.770642 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.770648 | orchestrator | 2025-07-12 13:54:26.770653 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-07-12 13:54:26.770658 | orchestrator | Saturday 12 July 2025 13:48:55 +0000 (0:00:00.414) 0:06:35.265 ********* 2025-07-12 13:54:26.770664 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:54:26.770669 | orchestrator | 2025-07-12 13:54:26.770674 | orchestrator | TASK [ceph-mgr : Generate 
systemd unit file] *********************************** 2025-07-12 13:54:26.770679 | orchestrator | Saturday 12 July 2025 13:48:56 +0000 (0:00:00.911) 0:06:36.176 ********* 2025-07-12 13:54:26.770685 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:54:26.770690 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:54:26.770695 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:54:26.770700 | orchestrator | 2025-07-12 13:54:26.770706 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-07-12 13:54:26.770711 | orchestrator | Saturday 12 July 2025 13:48:57 +0000 (0:00:01.218) 0:06:37.394 ********* 2025-07-12 13:54:26.770716 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:54:26.770726 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:54:26.770731 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:54:26.770736 | orchestrator | 2025-07-12 13:54:26.770741 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-07-12 13:54:26.770747 | orchestrator | Saturday 12 July 2025 13:48:59 +0000 (0:00:01.146) 0:06:38.541 ********* 2025-07-12 13:54:26.770752 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:54:26.770757 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:54:26.770762 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:54:26.770768 | orchestrator | 2025-07-12 13:54:26.770773 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-07-12 13:54:26.770778 | orchestrator | Saturday 12 July 2025 13:49:01 +0000 (0:00:02.144) 0:06:40.685 ********* 2025-07-12 13:54:26.770784 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:54:26.770789 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:54:26.770794 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:54:26.770799 | orchestrator | 2025-07-12 13:54:26.770805 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] 
************************************** 2025-07-12 13:54:26.770810 | orchestrator | Saturday 12 July 2025 13:49:03 +0000 (0:00:01.890) 0:06:42.576 ********* 2025-07-12 13:54:26.770815 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.770821 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.770829 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-07-12 13:54:26.770834 | orchestrator | 2025-07-12 13:54:26.770840 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-07-12 13:54:26.770845 | orchestrator | Saturday 12 July 2025 13:49:03 +0000 (0:00:00.453) 0:06:43.030 ********* 2025-07-12 13:54:26.770850 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-07-12 13:54:26.770856 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-07-12 13:54:26.770880 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-07-12 13:54:26.770886 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-07-12 13:54:26.770892 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2025-07-12 13:54:26.770897 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-07-12 13:54:26.770902 | orchestrator | 2025-07-12 13:54:26.770908 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-07-12 13:54:26.770913 | orchestrator | Saturday 12 July 2025 13:49:33 +0000 (0:00:29.871) 0:07:12.901 ********* 2025-07-12 13:54:26.770918 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-07-12 13:54:26.770923 | orchestrator | 2025-07-12 13:54:26.770929 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-07-12 13:54:26.770934 | orchestrator | Saturday 12 July 2025 13:49:34 +0000 (0:00:01.515) 0:07:14.417 ********* 2025-07-12 13:54:26.770939 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:26.770945 | orchestrator | 2025-07-12 13:54:26.770950 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-07-12 13:54:26.770955 | orchestrator | Saturday 12 July 2025 13:49:35 +0000 (0:00:00.886) 0:07:15.303 ********* 2025-07-12 13:54:26.770960 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:26.770966 | orchestrator | 2025-07-12 13:54:26.770971 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-07-12 13:54:26.770976 | orchestrator | Saturday 12 July 2025 13:49:35 +0000 (0:00:00.150) 0:07:15.454 ********* 2025-07-12 13:54:26.770981 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-07-12 13:54:26.770987 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-07-12 13:54:26.770992 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-07-12 13:54:26.771001 | orchestrator | 2025-07-12 13:54:26.771006 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2025-07-12 13:54:26.771012 | orchestrator | Saturday 12 July 2025 13:49:42 +0000 (0:00:06.193) 0:07:21.647 ********* 2025-07-12 13:54:26.771017 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-07-12 13:54:26.771022 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-07-12 13:54:26.771028 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-07-12 13:54:26.771033 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-07-12 13:54:26.771038 | orchestrator | 2025-07-12 13:54:26.771043 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-07-12 13:54:26.771049 | orchestrator | Saturday 12 July 2025 13:49:46 +0000 (0:00:04.760) 0:07:26.408 ********* 2025-07-12 13:54:26.771054 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:54:26.771059 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:54:26.771064 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:54:26.771070 | orchestrator | 2025-07-12 13:54:26.771075 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-07-12 13:54:26.771080 | orchestrator | Saturday 12 July 2025 13:49:47 +0000 (0:00:00.946) 0:07:27.354 ********* 2025-07-12 13:54:26.771086 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:54:26.771091 | orchestrator | 2025-07-12 13:54:26.771096 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-07-12 13:54:26.771101 | orchestrator | Saturday 12 July 2025 13:49:48 +0000 (0:00:00.536) 0:07:27.891 ********* 2025-07-12 13:54:26.771107 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:26.771112 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:26.771117 | orchestrator | ok: 
[testbed-node-2] 2025-07-12 13:54:26.771122 | orchestrator | 2025-07-12 13:54:26.771128 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-07-12 13:54:26.771133 | orchestrator | Saturday 12 July 2025 13:49:48 +0000 (0:00:00.372) 0:07:28.263 ********* 2025-07-12 13:54:26.771138 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:54:26.771144 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:54:26.771149 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:54:26.771154 | orchestrator | 2025-07-12 13:54:26.771159 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-07-12 13:54:26.771165 | orchestrator | Saturday 12 July 2025 13:49:50 +0000 (0:00:01.473) 0:07:29.737 ********* 2025-07-12 13:54:26.771170 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-07-12 13:54:26.771175 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-07-12 13:54:26.771180 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-07-12 13:54:26.771186 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.771191 | orchestrator | 2025-07-12 13:54:26.771196 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-07-12 13:54:26.771201 | orchestrator | Saturday 12 July 2025 13:49:50 +0000 (0:00:00.595) 0:07:30.333 ********* 2025-07-12 13:54:26.771207 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:26.771212 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:26.771220 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:26.771225 | orchestrator | 2025-07-12 13:54:26.771231 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-07-12 13:54:26.771236 | orchestrator | 2025-07-12 13:54:26.771241 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-07-12 
13:54:26.771246 | orchestrator | Saturday 12 July 2025 13:49:51 +0000 (0:00:00.534) 0:07:30.868 ********* 2025-07-12 13:54:26.771252 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:26.771257 | orchestrator | 2025-07-12 13:54:26.771266 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-07-12 13:54:26.771288 | orchestrator | Saturday 12 July 2025 13:49:52 +0000 (0:00:00.750) 0:07:31.619 ********* 2025-07-12 13:54:26.771294 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:26.771300 | orchestrator | 2025-07-12 13:54:26.771305 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-07-12 13:54:26.771310 | orchestrator | Saturday 12 July 2025 13:49:52 +0000 (0:00:00.504) 0:07:32.124 ********* 2025-07-12 13:54:26.771315 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.771321 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.771326 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.771331 | orchestrator | 2025-07-12 13:54:26.771337 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-07-12 13:54:26.771342 | orchestrator | Saturday 12 July 2025 13:49:52 +0000 (0:00:00.335) 0:07:32.459 ********* 2025-07-12 13:54:26.771347 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.771352 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.771358 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.771363 | orchestrator | 2025-07-12 13:54:26.771368 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-07-12 13:54:26.771373 | orchestrator | Saturday 12 July 2025 13:49:53 +0000 (0:00:00.933) 0:07:33.392 ********* 
2025-07-12 13:54:26.771379 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.771384 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.771389 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.771394 | orchestrator | 2025-07-12 13:54:26.771400 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-07-12 13:54:26.771405 | orchestrator | Saturday 12 July 2025 13:49:54 +0000 (0:00:00.707) 0:07:34.100 ********* 2025-07-12 13:54:26.771410 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.771415 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.771421 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.771426 | orchestrator | 2025-07-12 13:54:26.771431 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-07-12 13:54:26.771436 | orchestrator | Saturday 12 July 2025 13:49:55 +0000 (0:00:00.748) 0:07:34.849 ********* 2025-07-12 13:54:26.771442 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.771463 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.771468 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.771474 | orchestrator | 2025-07-12 13:54:26.771479 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-07-12 13:54:26.771484 | orchestrator | Saturday 12 July 2025 13:49:55 +0000 (0:00:00.302) 0:07:35.151 ********* 2025-07-12 13:54:26.771490 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.771495 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.771500 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.771505 | orchestrator | 2025-07-12 13:54:26.771511 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-07-12 13:54:26.771516 | orchestrator | Saturday 12 July 2025 13:49:56 +0000 (0:00:00.562) 0:07:35.714 ********* 2025-07-12 13:54:26.771521 | 
orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.771526 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.771532 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.771537 | orchestrator | 2025-07-12 13:54:26.771542 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-07-12 13:54:26.771547 | orchestrator | Saturday 12 July 2025 13:49:56 +0000 (0:00:00.325) 0:07:36.040 ********* 2025-07-12 13:54:26.771553 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.771558 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.771563 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.771568 | orchestrator | 2025-07-12 13:54:26.771574 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-12 13:54:26.771579 | orchestrator | Saturday 12 July 2025 13:49:57 +0000 (0:00:00.689) 0:07:36.729 ********* 2025-07-12 13:54:26.771588 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.771594 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.771599 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.771604 | orchestrator | 2025-07-12 13:54:26.771610 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-12 13:54:26.771615 | orchestrator | Saturday 12 July 2025 13:49:57 +0000 (0:00:00.794) 0:07:37.524 ********* 2025-07-12 13:54:26.771620 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.771625 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.771631 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.771636 | orchestrator | 2025-07-12 13:54:26.771641 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-07-12 13:54:26.771647 | orchestrator | Saturday 12 July 2025 13:49:58 +0000 (0:00:00.643) 0:07:38.167 ********* 2025-07-12 13:54:26.771652 | orchestrator | skipping: 
[testbed-node-3] 2025-07-12 13:54:26.771657 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.771662 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.771668 | orchestrator | 2025-07-12 13:54:26.771673 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-12 13:54:26.771678 | orchestrator | Saturday 12 July 2025 13:49:58 +0000 (0:00:00.313) 0:07:38.480 ********* 2025-07-12 13:54:26.771683 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.771689 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.771694 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.771699 | orchestrator | 2025-07-12 13:54:26.771704 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-07-12 13:54:26.771710 | orchestrator | Saturday 12 July 2025 13:49:59 +0000 (0:00:00.329) 0:07:38.810 ********* 2025-07-12 13:54:26.771718 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.771724 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.771729 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.771734 | orchestrator | 2025-07-12 13:54:26.771739 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-07-12 13:54:26.771745 | orchestrator | Saturday 12 July 2025 13:49:59 +0000 (0:00:00.315) 0:07:39.125 ********* 2025-07-12 13:54:26.771750 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.771755 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.771760 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.771766 | orchestrator | 2025-07-12 13:54:26.771771 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-07-12 13:54:26.771776 | orchestrator | Saturday 12 July 2025 13:50:00 +0000 (0:00:00.588) 0:07:39.714 ********* 2025-07-12 13:54:26.771785 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.771790 | 
orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.771795 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.771801 | orchestrator | 2025-07-12 13:54:26.771806 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-12 13:54:26.771811 | orchestrator | Saturday 12 July 2025 13:50:00 +0000 (0:00:00.345) 0:07:40.060 ********* 2025-07-12 13:54:26.771816 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.771822 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.771827 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.771832 | orchestrator | 2025-07-12 13:54:26.771838 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-12 13:54:26.771843 | orchestrator | Saturday 12 July 2025 13:50:00 +0000 (0:00:00.403) 0:07:40.463 ********* 2025-07-12 13:54:26.771848 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.771853 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.771859 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.771864 | orchestrator | 2025-07-12 13:54:26.771869 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-12 13:54:26.771875 | orchestrator | Saturday 12 July 2025 13:50:01 +0000 (0:00:00.314) 0:07:40.777 ********* 2025-07-12 13:54:26.771880 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.771889 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.771894 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.771899 | orchestrator | 2025-07-12 13:54:26.771905 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-07-12 13:54:26.771910 | orchestrator | Saturday 12 July 2025 13:50:01 +0000 (0:00:00.612) 0:07:41.390 ********* 2025-07-12 13:54:26.771915 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.771921 | orchestrator | ok: 
[testbed-node-4] 2025-07-12 13:54:26.771926 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.771931 | orchestrator | 2025-07-12 13:54:26.771936 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-07-12 13:54:26.771942 | orchestrator | Saturday 12 July 2025 13:50:02 +0000 (0:00:00.591) 0:07:41.981 ********* 2025-07-12 13:54:26.771947 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.771952 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.771957 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.771962 | orchestrator | 2025-07-12 13:54:26.771968 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-07-12 13:54:26.771973 | orchestrator | Saturday 12 July 2025 13:50:02 +0000 (0:00:00.315) 0:07:42.297 ********* 2025-07-12 13:54:26.771978 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-07-12 13:54:26.771984 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-12 13:54:26.771989 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-12 13:54:26.771994 | orchestrator | 2025-07-12 13:54:26.772000 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-07-12 13:54:26.772005 | orchestrator | Saturday 12 July 2025 13:50:03 +0000 (0:00:00.955) 0:07:43.252 ********* 2025-07-12 13:54:26.772010 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:26.772015 | orchestrator | 2025-07-12 13:54:26.772021 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-07-12 13:54:26.772026 | orchestrator | Saturday 12 July 2025 13:50:04 +0000 (0:00:00.797) 0:07:44.049 ********* 2025-07-12 13:54:26.772031 | orchestrator | skipping: 
[testbed-node-3] 2025-07-12 13:54:26.772037 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.772042 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.772047 | orchestrator | 2025-07-12 13:54:26.772053 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-07-12 13:54:26.772058 | orchestrator | Saturday 12 July 2025 13:50:04 +0000 (0:00:00.311) 0:07:44.361 ********* 2025-07-12 13:54:26.772063 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.772068 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.772074 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.772079 | orchestrator | 2025-07-12 13:54:26.772084 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-07-12 13:54:26.772090 | orchestrator | Saturday 12 July 2025 13:50:05 +0000 (0:00:00.313) 0:07:44.675 ********* 2025-07-12 13:54:26.772095 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.772100 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.772105 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.772111 | orchestrator | 2025-07-12 13:54:26.772116 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-07-12 13:54:26.772121 | orchestrator | Saturday 12 July 2025 13:50:06 +0000 (0:00:00.912) 0:07:45.587 ********* 2025-07-12 13:54:26.772126 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.772132 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.772137 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.772142 | orchestrator | 2025-07-12 13:54:26.772147 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-07-12 13:54:26.772152 | orchestrator | Saturday 12 July 2025 13:50:06 +0000 (0:00:00.332) 0:07:45.920 ********* 2025-07-12 13:54:26.772158 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-07-12 13:54:26.772171 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-07-12 13:54:26.772176 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-07-12 13:54:26.772182 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-07-12 13:54:26.772187 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-07-12 13:54:26.772192 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-07-12 13:54:26.772198 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-07-12 13:54:26.772207 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-07-12 13:54:26.772213 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-07-12 13:54:26.772218 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-07-12 13:54:26.772223 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-07-12 13:54:26.772229 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-07-12 13:54:26.772234 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-07-12 13:54:26.772239 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-07-12 13:54:26.772245 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-07-12 13:54:26.772250 | orchestrator | 2025-07-12 13:54:26.772255 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2025-07-12 13:54:26.772261 | orchestrator | Saturday 12 July 2025 13:50:08 +0000 (0:00:01.976) 0:07:47.896 ********* 2025-07-12 13:54:26.772266 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.772271 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.772277 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.772282 | orchestrator | 2025-07-12 13:54:26.772287 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-07-12 13:54:26.772292 | orchestrator | Saturday 12 July 2025 13:50:08 +0000 (0:00:00.304) 0:07:48.200 ********* 2025-07-12 13:54:26.772298 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:26.772303 | orchestrator | 2025-07-12 13:54:26.772308 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-07-12 13:54:26.772314 | orchestrator | Saturday 12 July 2025 13:50:09 +0000 (0:00:00.819) 0:07:49.019 ********* 2025-07-12 13:54:26.772319 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-07-12 13:54:26.772324 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-07-12 13:54:26.772330 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-07-12 13:54:26.772335 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-07-12 13:54:26.772340 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-07-12 13:54:26.772345 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-07-12 13:54:26.772350 | orchestrator | 2025-07-12 13:54:26.772356 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-07-12 13:54:26.772361 | orchestrator | Saturday 12 July 2025 13:50:10 +0000 (0:00:00.994) 0:07:50.014 ********* 2025-07-12 13:54:26.772367 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 13:54:26.772372 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-07-12 13:54:26.772377 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-12 13:54:26.772382 | orchestrator | 2025-07-12 13:54:26.772388 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-07-12 13:54:26.772397 | orchestrator | Saturday 12 July 2025 13:50:12 +0000 (0:00:01.978) 0:07:51.992 ********* 2025-07-12 13:54:26.772402 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-07-12 13:54:26.772407 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-07-12 13:54:26.772413 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:54:26.772418 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-07-12 13:54:26.772423 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-07-12 13:54:26.772428 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:54:26.772434 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-07-12 13:54:26.772439 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-07-12 13:54:26.772462 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:54:26.772468 | orchestrator | 2025-07-12 13:54:26.772473 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-07-12 13:54:26.772479 | orchestrator | Saturday 12 July 2025 13:50:13 +0000 (0:00:01.501) 0:07:53.494 ********* 2025-07-12 13:54:26.772484 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-12 13:54:26.772489 | orchestrator | 2025-07-12 13:54:26.772494 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-07-12 13:54:26.772500 | orchestrator | Saturday 12 July 2025 13:50:16 +0000 (0:00:02.071) 0:07:55.565 ********* 2025-07-12 13:54:26.772505 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:26.772510 | orchestrator | 2025-07-12 13:54:26.772516 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-07-12 13:54:26.772521 | orchestrator | Saturday 12 July 2025 13:50:16 +0000 (0:00:00.583) 0:07:56.148 ********* 2025-07-12 13:54:26.772529 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-76cf46ce-80cb-5d18-8384-c0838affc5b6', 'data_vg': 'ceph-76cf46ce-80cb-5d18-8384-c0838affc5b6'}) 2025-07-12 13:54:26.772535 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8be3c046-75c4-5df6-b59b-0076bb3a4ccd', 'data_vg': 'ceph-8be3c046-75c4-5df6-b59b-0076bb3a4ccd'}) 2025-07-12 13:54:26.772540 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f86cb3d6-0e78-5b6a-8369-843476bf59dc', 'data_vg': 'ceph-f86cb3d6-0e78-5b6a-8369-843476bf59dc'}) 2025-07-12 13:54:26.772549 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-465622e3-903d-5505-a41f-76599f0f3897', 'data_vg': 'ceph-465622e3-903d-5505-a41f-76599f0f3897'}) 2025-07-12 13:54:26.772555 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42', 'data_vg': 'ceph-f8ec8ce8-a083-5a5f-ae06-780cf5acbe42'}) 2025-07-12 13:54:26.772560 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a', 'data_vg': 'ceph-8c07aa4b-79b5-5c8f-bb7a-3f1e0dfe1f2a'}) 2025-07-12 13:54:26.772566 | orchestrator | 2025-07-12 13:54:26.772571 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-07-12 13:54:26.772576 | orchestrator | Saturday 12 July 2025 13:51:02 +0000 (0:00:45.480) 0:08:41.629 ********* 2025-07-12 13:54:26.772582 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.772587 | orchestrator | skipping: [testbed-node-4] 2025-07-12 
13:54:26.772592 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.772598 | orchestrator | 2025-07-12 13:54:26.772603 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-07-12 13:54:26.772608 | orchestrator | Saturday 12 July 2025 13:51:02 +0000 (0:00:00.540) 0:08:42.170 ********* 2025-07-12 13:54:26.772614 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:26.772619 | orchestrator | 2025-07-12 13:54:26.772624 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-07-12 13:54:26.772630 | orchestrator | Saturday 12 July 2025 13:51:03 +0000 (0:00:00.666) 0:08:42.836 ********* 2025-07-12 13:54:26.772635 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.772644 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.772650 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.772655 | orchestrator | 2025-07-12 13:54:26.772660 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-07-12 13:54:26.772666 | orchestrator | Saturday 12 July 2025 13:51:03 +0000 (0:00:00.679) 0:08:43.516 ********* 2025-07-12 13:54:26.772671 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.772676 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.772682 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.772687 | orchestrator | 2025-07-12 13:54:26.772692 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-07-12 13:54:26.772698 | orchestrator | Saturday 12 July 2025 13:51:06 +0000 (0:00:02.943) 0:08:46.459 ********* 2025-07-12 13:54:26.772703 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:26.772708 | orchestrator | 2025-07-12 13:54:26.772714 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2025-07-12 13:54:26.772719 | orchestrator | Saturday 12 July 2025 13:51:07 +0000 (0:00:00.540) 0:08:47.000 ********* 2025-07-12 13:54:26.772724 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:54:26.772730 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:54:26.772735 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:54:26.772740 | orchestrator | 2025-07-12 13:54:26.772745 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-07-12 13:54:26.772751 | orchestrator | Saturday 12 July 2025 13:51:08 +0000 (0:00:01.249) 0:08:48.249 ********* 2025-07-12 13:54:26.772756 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:54:26.772761 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:54:26.772767 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:54:26.772772 | orchestrator | 2025-07-12 13:54:26.772777 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-07-12 13:54:26.772783 | orchestrator | Saturday 12 July 2025 13:51:10 +0000 (0:00:01.389) 0:08:49.639 ********* 2025-07-12 13:54:26.772788 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:54:26.772793 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:54:26.772798 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:54:26.772803 | orchestrator | 2025-07-12 13:54:26.772809 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-07-12 13:54:26.772814 | orchestrator | Saturday 12 July 2025 13:51:11 +0000 (0:00:01.684) 0:08:51.323 ********* 2025-07-12 13:54:26.772819 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.772825 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.772830 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.772835 | orchestrator | 2025-07-12 13:54:26.772840 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2025-07-12 13:54:26.772846 | orchestrator | Saturday 12 July 2025 13:51:12 +0000 (0:00:00.324) 0:08:51.648 ********* 2025-07-12 13:54:26.772851 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.772856 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.772861 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.772866 | orchestrator | 2025-07-12 13:54:26.772872 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-07-12 13:54:26.772877 | orchestrator | Saturday 12 July 2025 13:51:12 +0000 (0:00:00.319) 0:08:51.968 ********* 2025-07-12 13:54:26.772882 | orchestrator | ok: [testbed-node-3] => (item=5) 2025-07-12 13:54:26.772888 | orchestrator | ok: [testbed-node-4] => (item=4) 2025-07-12 13:54:26.772893 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-07-12 13:54:26.772898 | orchestrator | ok: [testbed-node-3] => (item=1) 2025-07-12 13:54:26.772903 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-07-12 13:54:26.772909 | orchestrator | ok: [testbed-node-5] => (item=3) 2025-07-12 13:54:26.772914 | orchestrator | 2025-07-12 13:54:26.772919 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-07-12 13:54:26.772927 | orchestrator | Saturday 12 July 2025 13:51:13 +0000 (0:00:01.304) 0:08:53.272 ********* 2025-07-12 13:54:26.772936 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-07-12 13:54:26.772942 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-07-12 13:54:26.772947 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-07-12 13:54:26.772952 | orchestrator | changed: [testbed-node-3] => (item=1) 2025-07-12 13:54:26.772957 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-07-12 13:54:26.772963 | orchestrator | changed: [testbed-node-4] => (item=0) 2025-07-12 13:54:26.772968 | orchestrator | 2025-07-12 13:54:26.772973 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2025-07-12 13:54:26.772982 | orchestrator | Saturday 12 July 2025 13:51:15 +0000 (0:00:02.191) 0:08:55.464 ********* 2025-07-12 13:54:26.772987 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-07-12 13:54:26.772993 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-07-12 13:54:26.772998 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-07-12 13:54:26.773003 | orchestrator | changed: [testbed-node-3] => (item=1) 2025-07-12 13:54:26.773009 | orchestrator | changed: [testbed-node-4] => (item=0) 2025-07-12 13:54:26.773014 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-07-12 13:54:26.773019 | orchestrator | 2025-07-12 13:54:26.773025 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-07-12 13:54:26.773030 | orchestrator | Saturday 12 July 2025 13:51:19 +0000 (0:00:03.485) 0:08:58.949 ********* 2025-07-12 13:54:26.773035 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.773040 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.773046 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-07-12 13:54:26.773051 | orchestrator | 2025-07-12 13:54:26.773056 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-07-12 13:54:26.773062 | orchestrator | Saturday 12 July 2025 13:51:21 +0000 (0:00:02.446) 0:09:01.396 ********* 2025-07-12 13:54:26.773067 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.773072 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.773078 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2025-07-12 13:54:26.773083 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-07-12 13:54:26.773089 | orchestrator | 2025-07-12 13:54:26.773094 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-07-12 13:54:26.773099 | orchestrator | Saturday 12 July 2025 13:51:34 +0000 (0:00:13.002) 0:09:14.398 ********* 2025-07-12 13:54:26.773105 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.773110 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.773115 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.773121 | orchestrator | 2025-07-12 13:54:26.773126 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-07-12 13:54:26.773131 | orchestrator | Saturday 12 July 2025 13:51:35 +0000 (0:00:00.827) 0:09:15.226 ********* 2025-07-12 13:54:26.773137 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.773142 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.773147 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.773152 | orchestrator | 2025-07-12 13:54:26.773158 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-07-12 13:54:26.773163 | orchestrator | Saturday 12 July 2025 13:51:36 +0000 (0:00:00.596) 0:09:15.822 ********* 2025-07-12 13:54:26.773168 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:26.773174 | orchestrator | 2025-07-12 13:54:26.773179 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-07-12 13:54:26.773184 | orchestrator | Saturday 12 July 2025 13:51:36 +0000 (0:00:00.516) 0:09:16.338 ********* 2025-07-12 13:54:26.773190 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 13:54:26.773195 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2025-07-12 13:54:26.773204 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 13:54:26.773209 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.773214 | orchestrator | 2025-07-12 13:54:26.773220 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-07-12 13:54:26.773225 | orchestrator | Saturday 12 July 2025 13:51:37 +0000 (0:00:00.383) 0:09:16.722 ********* 2025-07-12 13:54:26.773230 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.773236 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.773241 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.773246 | orchestrator | 2025-07-12 13:54:26.773251 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-07-12 13:54:26.773257 | orchestrator | Saturday 12 July 2025 13:51:37 +0000 (0:00:00.315) 0:09:17.038 ********* 2025-07-12 13:54:26.773262 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.773267 | orchestrator | 2025-07-12 13:54:26.773273 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-07-12 13:54:26.773278 | orchestrator | Saturday 12 July 2025 13:51:37 +0000 (0:00:00.210) 0:09:17.248 ********* 2025-07-12 13:54:26.773283 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.773288 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.773294 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.773299 | orchestrator | 2025-07-12 13:54:26.773304 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-07-12 13:54:26.773309 | orchestrator | Saturday 12 July 2025 13:51:38 +0000 (0:00:00.585) 0:09:17.833 ********* 2025-07-12 13:54:26.773315 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.773320 | orchestrator | 2025-07-12 13:54:26.773325 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2025-07-12 13:54:26.773331 | orchestrator | Saturday 12 July 2025 13:51:38 +0000 (0:00:00.230) 0:09:18.064 ********* 2025-07-12 13:54:26.773336 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.773341 | orchestrator | 2025-07-12 13:54:26.773346 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-07-12 13:54:26.773352 | orchestrator | Saturday 12 July 2025 13:51:38 +0000 (0:00:00.230) 0:09:18.294 ********* 2025-07-12 13:54:26.773360 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.773365 | orchestrator | 2025-07-12 13:54:26.773371 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-07-12 13:54:26.773376 | orchestrator | Saturday 12 July 2025 13:51:38 +0000 (0:00:00.131) 0:09:18.426 ********* 2025-07-12 13:54:26.773381 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.773386 | orchestrator | 2025-07-12 13:54:26.773392 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-07-12 13:54:26.773397 | orchestrator | Saturday 12 July 2025 13:51:39 +0000 (0:00:00.222) 0:09:18.648 ********* 2025-07-12 13:54:26.773402 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.773407 | orchestrator | 2025-07-12 13:54:26.773416 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-07-12 13:54:26.773421 | orchestrator | Saturday 12 July 2025 13:51:39 +0000 (0:00:00.217) 0:09:18.865 ********* 2025-07-12 13:54:26.773427 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 13:54:26.773432 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 13:54:26.773437 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 13:54:26.773475 | orchestrator | skipping: [testbed-node-3] 2025-07-12 
13:54:26.773486 | orchestrator | 2025-07-12 13:54:26.773495 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-07-12 13:54:26.773503 | orchestrator | Saturday 12 July 2025 13:51:39 +0000 (0:00:00.385) 0:09:19.251 ********* 2025-07-12 13:54:26.773513 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.773518 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.773523 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.773529 | orchestrator | 2025-07-12 13:54:26.773534 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-07-12 13:54:26.773544 | orchestrator | Saturday 12 July 2025 13:51:40 +0000 (0:00:00.348) 0:09:19.599 ********* 2025-07-12 13:54:26.773549 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.773555 | orchestrator | 2025-07-12 13:54:26.773560 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-07-12 13:54:26.773565 | orchestrator | Saturday 12 July 2025 13:51:40 +0000 (0:00:00.739) 0:09:20.338 ********* 2025-07-12 13:54:26.773571 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.773576 | orchestrator | 2025-07-12 13:54:26.773581 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-07-12 13:54:26.773587 | orchestrator | 2025-07-12 13:54:26.773592 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-07-12 13:54:26.773597 | orchestrator | Saturday 12 July 2025 13:51:41 +0000 (0:00:00.694) 0:09:21.033 ********* 2025-07-12 13:54:26.773603 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:26.773608 | orchestrator | 2025-07-12 13:54:26.773613 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2025-07-12 13:54:26.773619 | orchestrator | Saturday 12 July 2025 13:51:42 +0000 (0:00:01.237) 0:09:22.270 ********* 2025-07-12 13:54:26.773624 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:26.773629 | orchestrator | 2025-07-12 13:54:26.773635 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-07-12 13:54:26.773640 | orchestrator | Saturday 12 July 2025 13:51:44 +0000 (0:00:01.256) 0:09:23.526 ********* 2025-07-12 13:54:26.773645 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.773651 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:26.773656 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.773661 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:26.773667 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:26.773672 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.773677 | orchestrator | 2025-07-12 13:54:26.773683 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-07-12 13:54:26.773688 | orchestrator | Saturday 12 July 2025 13:51:44 +0000 (0:00:00.869) 0:09:24.396 ********* 2025-07-12 13:54:26.773693 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.773699 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.773704 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.773709 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.773715 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.773720 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.773725 | orchestrator | 2025-07-12 13:54:26.773730 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-07-12 13:54:26.773736 | orchestrator | Saturday 12 
July 2025 13:51:45 +0000 (0:00:01.003) 0:09:25.399 ********* 2025-07-12 13:54:26.773741 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.773746 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.773752 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.773757 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.773762 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.773767 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.773772 | orchestrator | 2025-07-12 13:54:26.773778 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-07-12 13:54:26.773783 | orchestrator | Saturday 12 July 2025 13:51:47 +0000 (0:00:01.246) 0:09:26.646 ********* 2025-07-12 13:54:26.773789 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.773794 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.773799 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.773804 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.773810 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.773819 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.773824 | orchestrator | 2025-07-12 13:54:26.773830 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-07-12 13:54:26.773835 | orchestrator | Saturday 12 July 2025 13:51:48 +0000 (0:00:00.985) 0:09:27.631 ********* 2025-07-12 13:54:26.773840 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.773846 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:26.773851 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.773856 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:26.773865 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:26.773870 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.773875 | orchestrator | 2025-07-12 13:54:26.773881 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2025-07-12 13:54:26.773886 | orchestrator | Saturday 12 July 2025 13:51:49 +0000 (0:00:00.903) 0:09:28.534 ********* 2025-07-12 13:54:26.773892 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.773897 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.773902 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.773907 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.773913 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.773918 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.773923 | orchestrator | 2025-07-12 13:54:26.773932 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-07-12 13:54:26.773937 | orchestrator | Saturday 12 July 2025 13:51:49 +0000 (0:00:00.601) 0:09:29.136 ********* 2025-07-12 13:54:26.773943 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.773948 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.773953 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.773958 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.773963 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.773969 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.773974 | orchestrator | 2025-07-12 13:54:26.773979 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-07-12 13:54:26.773984 | orchestrator | Saturday 12 July 2025 13:51:50 +0000 (0:00:00.836) 0:09:29.972 ********* 2025-07-12 13:54:26.773990 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:26.773995 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:26.774000 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:26.774005 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.774010 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.774030 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.774035 | orchestrator 
| 2025-07-12 13:54:26.774040 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-12 13:54:26.774045 | orchestrator | Saturday 12 July 2025 13:51:51 +0000 (0:00:01.010) 0:09:30.983 ********* 2025-07-12 13:54:26.774049 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:26.774054 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:26.774059 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:26.774063 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.774068 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.774073 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.774077 | orchestrator | 2025-07-12 13:54:26.774082 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-12 13:54:26.774087 | orchestrator | Saturday 12 July 2025 13:51:53 +0000 (0:00:01.595) 0:09:32.579 ********* 2025-07-12 13:54:26.774091 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.774096 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.774101 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.774105 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.774110 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.774114 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.774119 | orchestrator | 2025-07-12 13:54:26.774124 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-07-12 13:54:26.774129 | orchestrator | Saturday 12 July 2025 13:51:53 +0000 (0:00:00.606) 0:09:33.186 ********* 2025-07-12 13:54:26.774137 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:26.774141 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:26.774146 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:26.774151 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.774156 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.774160 | 
orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.774165 | orchestrator | 2025-07-12 13:54:26.774170 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-12 13:54:26.774174 | orchestrator | Saturday 12 July 2025 13:51:54 +0000 (0:00:00.800) 0:09:33.986 ********* 2025-07-12 13:54:26.774179 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.774184 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.774188 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.774193 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.774198 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.774202 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.774207 | orchestrator | 2025-07-12 13:54:26.774212 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-07-12 13:54:26.774217 | orchestrator | Saturday 12 July 2025 13:51:55 +0000 (0:00:00.667) 0:09:34.654 ********* 2025-07-12 13:54:26.774221 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.774226 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.774231 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.774235 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.774240 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.774245 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.774249 | orchestrator | 2025-07-12 13:54:26.774254 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-07-12 13:54:26.774259 | orchestrator | Saturday 12 July 2025 13:51:56 +0000 (0:00:00.928) 0:09:35.582 ********* 2025-07-12 13:54:26.774263 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.774268 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.774273 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.774277 | orchestrator | ok: [testbed-node-3] 
2025-07-12 13:54:26.774282 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.774287 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.774291 | orchestrator | 2025-07-12 13:54:26.774296 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-07-12 13:54:26.774301 | orchestrator | Saturday 12 July 2025 13:51:56 +0000 (0:00:00.632) 0:09:36.214 ********* 2025-07-12 13:54:26.774305 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.774310 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.774315 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.774319 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.774324 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.774329 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.774333 | orchestrator | 2025-07-12 13:54:26.774338 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-12 13:54:26.774343 | orchestrator | Saturday 12 July 2025 13:51:57 +0000 (0:00:00.808) 0:09:37.023 ********* 2025-07-12 13:54:26.774347 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:26.774352 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:26.774357 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:26.774364 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.774368 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.774373 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.774378 | orchestrator | 2025-07-12 13:54:26.774382 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-12 13:54:26.774387 | orchestrator | Saturday 12 July 2025 13:51:58 +0000 (0:00:00.605) 0:09:37.629 ********* 2025-07-12 13:54:26.774392 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:26.774396 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:26.774401 | 
orchestrator | ok: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Saturday 12 July 2025 13:51:58 +0000 (0:00:00.849) 0:09:38.479 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Saturday 12 July 2025 13:51:59 +0000 (0:00:00.736) 0:09:39.215 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-crash : Create client.crash keyring] ********************************
Saturday 12 July 2025 13:52:01 +0000 (0:00:01.450) 0:09:40.665 *********
changed: [testbed-node-0]

TASK [ceph-crash : Get keys from monitors] *************************************
Saturday 12 July 2025 13:52:05 +0000 (0:00:04.002) 0:09:44.668 *********
ok: [testbed-node-0]

TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
Saturday 12 July 2025 13:52:07 +0000 (0:00:02.055) 0:09:46.723 *********
ok: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
Saturday 12 July 2025 13:52:08 +0000 (0:00:01.745) 0:09:48.469 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-crash : Include_tasks systemd.yml] **********************************
Saturday 12 July 2025 13:52:10 +0000 (0:00:01.190) 0:09:49.660 *********
included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
Saturday 12 July 2025 13:52:11 +0000 (0:00:01.448) 0:09:51.109 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-crash : Start the ceph-crash service] *******************************
Saturday 12 July 2025 13:52:13 +0000 (0:00:01.956) 0:09:53.066 *********
changed: [testbed-node-1]
changed: [testbed-node-3]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
Saturday 12 July 2025 13:52:16 +0000 (0:00:03.183) 0:09:56.249 *********
included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
Saturday 12 July 2025 13:52:18 +0000 (0:00:01.288) 0:09:57.537 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
Saturday 12 July 2025 13:52:18 +0000 (0:00:00.867) 0:09:58.404 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
Saturday 12 July 2025 13:52:21 +0000 (0:00:02.379) 0:10:00.784 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

PLAY [Apply role ceph-mds] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Saturday 12 July 2025 13:52:22 +0000 (0:00:01.475) 0:10:02.260 *********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Saturday 12 July 2025 13:52:23 +0000 (0:00:00.862) 0:10:03.123 *********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Saturday 12 July 2025 13:52:24 +0000 (0:00:01.064) 0:10:04.188 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Saturday 12 July 2025 13:52:25 +0000 (0:00:00.442) 0:10:04.630 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Saturday 12 July 2025 13:52:25 +0000 (0:00:00.781) 0:10:05.411 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Saturday 12 July 2025 13:52:27 +0000 (0:00:01.243) 0:10:06.654 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Saturday 12 July 2025 13:52:27 +0000 (0:00:00.825) 0:10:07.480 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Saturday 12 July 2025 13:52:28 +0000 (0:00:00.393) 0:10:07.874 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Saturday 12 July 2025 13:52:28 +0000 (0:00:00.354) 0:10:08.228 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Saturday 12 July 2025 13:52:29 +0000 (0:00:00.612) 0:10:08.840 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Saturday 12 July 2025 13:52:30 +0000 (0:00:00.781) 0:10:09.622 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Saturday 12 July 2025 13:52:31 +0000 (0:00:00.918) 0:10:10.541 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Saturday 12 July 2025 13:52:31 +0000 (0:00:00.323) 0:10:10.864 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Saturday 12 July 2025 13:52:32 +0000 (0:00:00.667) 0:10:11.532 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Saturday 12 July 2025 13:52:32 +0000 (0:00:00.403) 0:10:11.935 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Saturday 12 July 2025 13:52:32 +0000 (0:00:00.464) 0:10:12.400 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Saturday 12 July 2025 13:52:33 +0000 (0:00:00.359) 0:10:12.760 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Saturday 12 July 2025 13:52:34 +0000 (0:00:00.844) 0:10:13.604 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Saturday 12 July 2025 13:52:34 +0000 (0:00:00.358) 0:10:13.962 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Saturday 12 July 2025 13:52:34 +0000 (0:00:00.409) 0:10:14.371 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Saturday 12 July 2025 13:52:35 +0000 (0:00:00.447) 0:10:14.819 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
Saturday 12 July 2025 13:52:36 +0000 (0:00:01.130) 0:10:15.950 *********
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3

TASK [ceph-facts : Get current default crush rule details] *********************
Saturday 12 July 2025 13:52:36 +0000 (0:00:00.500) 0:10:16.451 *********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-facts : Get current default crush rule name] ************************
Saturday 12 July 2025 13:52:39 +0000 (0:00:02.275) 0:10:18.726 *********
skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
skipping: [testbed-node-3]

TASK [ceph-mds : Create filesystem pools] **************************************
Saturday 12 July 2025 13:52:39 +0000 (0:00:00.214) 0:10:18.940 *********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})

TASK [ceph-mds : Create ceph filesystem] ***************************************
Saturday 12 July 2025 13:52:48 +0000 (0:00:08.985) 0:10:27.926 *********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mds : Include common.yml] *******************************************
Saturday 12 July 2025 13:52:52 +0000 (0:00:03.708) 0:10:31.635 *********
included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
Saturday 12 July 2025 13:52:52 +0000 (0:00:00.529) 0:10:32.164 *********
ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)

TASK [ceph-mds : Get keys from monitors] ***************************************
Saturday 12 July 2025 13:52:53 +0000 (0:00:01.151) 0:10:33.316 *********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
Saturday 12 July 2025 13:52:56 +0000 (0:00:02.468) 0:10:35.785 *********
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-mds : Create mds keyring] *******************************************
Saturday 12 July 2025 13:52:57 +0000 (0:00:01.502) 0:10:37.287 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Non_containerized.yml] ****************************************
Saturday 12 July 2025 13:53:00 +0000 (0:00:02.693) 0:10:39.981 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-mds : Containerized.yml] ********************************************
Saturday 12 July 2025 13:53:00 +0000 (0:00:00.324) 0:10:40.305 *********
included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Include_tasks systemd.yml] ************************************
Saturday 12 July 2025 13:53:01 +0000 (0:00:00.751) 0:10:41.056 *********
included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Generate systemd unit file] ***********************************
Saturday 12 July 2025 13:53:02 +0000 (0:00:00.541) 0:10:41.598 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
Saturday 12 July 2025 13:53:03 +0000 (0:00:01.314) 0:10:42.913 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Enable ceph-mds.target] ***************************************
Saturday 12 July 2025 13:53:04 +0000 (0:00:01.508) 0:10:44.421 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Systemd start mds container] **********************************
Saturday 12 July 2025 13:53:06 +0000 (0:00:01.801) 0:10:46.223 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Wait for mds socket to exist] *********************************
Saturday 12 July 2025 13:53:08 +0000 (0:00:01.944) 0:10:48.167 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Saturday 12 July 2025 13:53:10 +0000 (0:00:01.513) 0:10:49.681 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
Saturday 12 July 2025 13:53:10 +0000 (0:00:00.707) 0:10:50.388 *********
included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
Saturday 12 July 2025 13:53:11 +0000 (0:00:00.810) 0:10:51.199 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
Saturday 12 July 2025 13:53:12 +0000 (0:00:00.328) 0:10:51.528 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
Saturday 12 July 2025 13:53:13 +0000 (0:00:01.221) 0:10:52.749 *********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
Saturday 12 July 2025 13:53:14 +0000 (0:00:00.845) 0:10:53.595 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

PLAY [Apply role ceph-rgw] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Saturday 12 July 2025 13:53:14 +0000 (0:00:00.840) 0:10:54.435 *********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Saturday 12 July 2025 13:53:15 +0000 (0:00:00.531) 0:10:54.967 *********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Saturday 12 July 2025 13:53:16 +0000 (0:00:00.791) 0:10:55.758 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Saturday 12 July 2025 13:53:16 +0000 (0:00:00.306) 0:10:56.065 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Saturday 12 July 2025 13:53:17 +0000 (0:00:00.736) 0:10:56.801 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Saturday 12 July 2025 13:53:18 +0000 (0:00:00.748) 0:10:57.550 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Saturday 12 July 2025 13:53:19 +0000 (0:00:01.068) 0:10:58.619 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Saturday 12 July 2025 13:53:19 +0000 (0:00:00.333) 0:10:58.952 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Saturday 12 July 2025 13:53:19 +0000 (0:00:00.319) 0:10:59.271 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Saturday 12 July 2025 13:53:20 +0000 (0:00:00.348) 0:10:59.620 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Saturday 12 July 2025 13:53:21 +0000 (0:00:01.166) 0:11:00.787 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Saturday 12 July 2025 13:53:22 +0000 (0:00:00.921) 0:11:01.709 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Saturday 12 July 2025 13:53:22 +0000 (0:00:00.367) 0:11:02.076 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Saturday 12 July 2025 13:53:22 +0000 (0:00:00.396) 0:11:02.473 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Saturday 12 July 2025 13:53:23 +0000 (0:00:00.841) 0:11:03.315 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Saturday 12 July 2025 13:53:24 +0000 (0:00:00.486) 0:11:03.801 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status]
****************************** 2025-07-12 13:54:26.776734 | orchestrator | Saturday 12 July 2025 13:53:24 +0000 (0:00:00.476) 0:11:04.278 ********* 2025-07-12 13:54:26.776739 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.776744 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.776748 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.776753 | orchestrator | 2025-07-12 13:54:26.776758 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-12 13:54:26.776763 | orchestrator | Saturday 12 July 2025 13:53:25 +0000 (0:00:00.431) 0:11:04.710 ********* 2025-07-12 13:54:26.776767 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.776772 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.776777 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.776786 | orchestrator | 2025-07-12 13:54:26.776791 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-12 13:54:26.776796 | orchestrator | Saturday 12 July 2025 13:53:25 +0000 (0:00:00.636) 0:11:05.346 ********* 2025-07-12 13:54:26.776801 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.776805 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.776810 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.776815 | orchestrator | 2025-07-12 13:54:26.776819 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-12 13:54:26.776824 | orchestrator | Saturday 12 July 2025 13:53:26 +0000 (0:00:00.297) 0:11:05.644 ********* 2025-07-12 13:54:26.776829 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.776833 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.776838 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.776843 | orchestrator | 2025-07-12 13:54:26.776847 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] 
************************* 2025-07-12 13:54:26.776852 | orchestrator | Saturday 12 July 2025 13:53:26 +0000 (0:00:00.398) 0:11:06.042 ********* 2025-07-12 13:54:26.776857 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.776861 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.776866 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.776871 | orchestrator | 2025-07-12 13:54:26.776876 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-07-12 13:54:26.776883 | orchestrator | Saturday 12 July 2025 13:53:27 +0000 (0:00:00.906) 0:11:06.948 ********* 2025-07-12 13:54:26.776888 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:26.776893 | orchestrator | 2025-07-12 13:54:26.776897 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-07-12 13:54:26.776902 | orchestrator | Saturday 12 July 2025 13:53:27 +0000 (0:00:00.549) 0:11:07.498 ********* 2025-07-12 13:54:26.776907 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 13:54:26.776912 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-07-12 13:54:26.776916 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-12 13:54:26.776921 | orchestrator | 2025-07-12 13:54:26.776928 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-07-12 13:54:26.776933 | orchestrator | Saturday 12 July 2025 13:53:30 +0000 (0:00:02.415) 0:11:09.914 ********* 2025-07-12 13:54:26.776938 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-07-12 13:54:26.776943 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-07-12 13:54:26.776948 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:54:26.776952 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-07-12 13:54:26.776957 
| orchestrator | skipping: [testbed-node-4] => (item=None)  2025-07-12 13:54:26.776962 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:54:26.776966 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-07-12 13:54:26.776971 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-07-12 13:54:26.776976 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:54:26.776980 | orchestrator | 2025-07-12 13:54:26.776985 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-07-12 13:54:26.776990 | orchestrator | Saturday 12 July 2025 13:53:31 +0000 (0:00:01.459) 0:11:11.373 ********* 2025-07-12 13:54:26.776994 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.776999 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.777004 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.777008 | orchestrator | 2025-07-12 13:54:26.777013 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-07-12 13:54:26.777018 | orchestrator | Saturday 12 July 2025 13:53:32 +0000 (0:00:00.316) 0:11:11.690 ********* 2025-07-12 13:54:26.777022 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:26.777027 | orchestrator | 2025-07-12 13:54:26.777032 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-07-12 13:54:26.777040 | orchestrator | Saturday 12 July 2025 13:53:32 +0000 (0:00:00.556) 0:11:12.246 ********* 2025-07-12 13:54:26.777045 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-07-12 13:54:26.777050 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 
'radosgw_frontend_port': 8081}) 2025-07-12 13:54:26.777055 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-07-12 13:54:26.777059 | orchestrator | 2025-07-12 13:54:26.777064 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-07-12 13:54:26.777069 | orchestrator | Saturday 12 July 2025 13:53:34 +0000 (0:00:01.327) 0:11:13.574 ********* 2025-07-12 13:54:26.777074 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 13:54:26.777078 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-07-12 13:54:26.777083 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 13:54:26.777088 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 13:54:26.777092 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-07-12 13:54:26.777097 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-07-12 13:54:26.777102 | orchestrator | 2025-07-12 13:54:26.777107 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-07-12 13:54:26.777112 | orchestrator | Saturday 12 July 2025 13:53:39 +0000 (0:00:05.178) 0:11:18.752 ********* 2025-07-12 13:54:26.777116 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 13:54:26.777121 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-12 13:54:26.777126 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item=None) 2025-07-12 13:54:26.777131 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-12 13:54:26.777135 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 13:54:26.777140 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-12 13:54:26.777145 | orchestrator | 2025-07-12 13:54:26.777149 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-07-12 13:54:26.777154 | orchestrator | Saturday 12 July 2025 13:53:41 +0000 (0:00:02.361) 0:11:21.114 ********* 2025-07-12 13:54:26.777159 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-07-12 13:54:26.777163 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:54:26.777168 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-07-12 13:54:26.777177 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:54:26.777182 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-07-12 13:54:26.777186 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:54:26.777191 | orchestrator | 2025-07-12 13:54:26.777196 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-07-12 13:54:26.777200 | orchestrator | Saturday 12 July 2025 13:53:42 +0000 (0:00:01.366) 0:11:22.480 ********* 2025-07-12 13:54:26.777205 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-07-12 13:54:26.777210 | orchestrator | 2025-07-12 13:54:26.777214 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-07-12 13:54:26.777222 | orchestrator | Saturday 12 July 2025 13:53:43 +0000 (0:00:00.247) 0:11:22.727 ********* 2025-07-12 13:54:26.777227 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-12 13:54:26.777235 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-12 13:54:26.777239 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-12 13:54:26.777244 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-12 13:54:26.777249 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-12 13:54:26.777254 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.777258 | orchestrator | 2025-07-12 13:54:26.777263 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-07-12 13:54:26.777268 | orchestrator | Saturday 12 July 2025 13:53:44 +0000 (0:00:00.907) 0:11:23.635 ********* 2025-07-12 13:54:26.777273 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-12 13:54:26.777277 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-12 13:54:26.777282 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-12 13:54:26.777287 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-12 13:54:26.777292 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-12 13:54:26.777296 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.777301 | orchestrator | 2025-07-12 13:54:26.777306 | 
orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-07-12 13:54:26.777310 | orchestrator | Saturday 12 July 2025 13:53:45 +0000 (0:00:01.149) 0:11:24.784 ********* 2025-07-12 13:54:26.777315 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-07-12 13:54:26.777320 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-07-12 13:54:26.777325 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-07-12 13:54:26.777329 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-07-12 13:54:26.777334 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-07-12 13:54:26.777339 | orchestrator | 2025-07-12 13:54:26.777344 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-07-12 13:54:26.777348 | orchestrator | Saturday 12 July 2025 13:54:13 +0000 (0:00:28.642) 0:11:53.427 ********* 2025-07-12 13:54:26.777353 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.777358 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.777363 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.777367 | orchestrator | 2025-07-12 13:54:26.777372 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-07-12 13:54:26.777377 | orchestrator | Saturday 12 July 2025 13:54:14 +0000 (0:00:00.287) 0:11:53.714 
********* 2025-07-12 13:54:26.777381 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.777386 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.777394 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.777399 | orchestrator | 2025-07-12 13:54:26.777404 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-07-12 13:54:26.777409 | orchestrator | Saturday 12 July 2025 13:54:14 +0000 (0:00:00.270) 0:11:53.984 ********* 2025-07-12 13:54:26.777413 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:26.777418 | orchestrator | 2025-07-12 13:54:26.777423 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-07-12 13:54:26.777430 | orchestrator | Saturday 12 July 2025 13:54:15 +0000 (0:00:00.614) 0:11:54.599 ********* 2025-07-12 13:54:26.777435 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:26.777439 | orchestrator | 2025-07-12 13:54:26.777476 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-07-12 13:54:26.777484 | orchestrator | Saturday 12 July 2025 13:54:15 +0000 (0:00:00.489) 0:11:55.089 ********* 2025-07-12 13:54:26.777492 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:54:26.777500 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:54:26.777507 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:54:26.777511 | orchestrator | 2025-07-12 13:54:26.777519 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-07-12 13:54:26.777524 | orchestrator | Saturday 12 July 2025 13:54:16 +0000 (0:00:01.324) 0:11:56.413 ********* 2025-07-12 13:54:26.777531 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:54:26.777538 | orchestrator | 
changed: [testbed-node-4] 2025-07-12 13:54:26.777545 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:54:26.777552 | orchestrator | 2025-07-12 13:54:26.777558 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-07-12 13:54:26.777569 | orchestrator | Saturday 12 July 2025 13:54:18 +0000 (0:00:01.376) 0:11:57.789 ********* 2025-07-12 13:54:26.777580 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:54:26.777587 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:54:26.777594 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:54:26.777602 | orchestrator | 2025-07-12 13:54:26.777609 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-07-12 13:54:26.777617 | orchestrator | Saturday 12 July 2025 13:54:20 +0000 (0:00:01.789) 0:11:59.579 ********* 2025-07-12 13:54:26.777623 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-07-12 13:54:26.777630 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-07-12 13:54:26.777638 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-07-12 13:54:26.777645 | orchestrator | 2025-07-12 13:54:26.777652 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-07-12 13:54:26.777660 | orchestrator | Saturday 12 July 2025 13:54:22 +0000 (0:00:02.820) 0:12:02.399 ********* 2025-07-12 13:54:26.777668 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.777675 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.777683 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.777691 | orchestrator | 2025-07-12 13:54:26.777699 | orchestrator | RUNNING HANDLER 
[ceph-handler : Rgws handler] ********************************** 2025-07-12 13:54:26.777707 | orchestrator | Saturday 12 July 2025 13:54:23 +0000 (0:00:00.381) 0:12:02.780 ********* 2025-07-12 13:54:26.777714 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:26.777721 | orchestrator | 2025-07-12 13:54:26.777726 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-07-12 13:54:26.777731 | orchestrator | Saturday 12 July 2025 13:54:23 +0000 (0:00:00.517) 0:12:03.298 ********* 2025-07-12 13:54:26.777736 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.777746 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.777751 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.777755 | orchestrator | 2025-07-12 13:54:26.777760 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-07-12 13:54:26.777765 | orchestrator | Saturday 12 July 2025 13:54:24 +0000 (0:00:00.567) 0:12:03.866 ********* 2025-07-12 13:54:26.777769 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.777774 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:26.777779 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:26.777783 | orchestrator | 2025-07-12 13:54:26.777788 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-07-12 13:54:26.777793 | orchestrator | Saturday 12 July 2025 13:54:24 +0000 (0:00:00.363) 0:12:04.229 ********* 2025-07-12 13:54:26.777797 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 13:54:26.777802 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 13:54:26.777807 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 13:54:26.777811 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:26.777816 | 
orchestrator | 2025-07-12 13:54:26.777821 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-07-12 13:54:26.777825 | orchestrator | Saturday 12 July 2025 13:54:25 +0000 (0:00:00.602) 0:12:04.831 ********* 2025-07-12 13:54:26.777830 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:26.777835 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:26.777839 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:26.777844 | orchestrator | 2025-07-12 13:54:26.777849 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:54:26.777853 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2025-07-12 13:54:26.777858 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-07-12 13:54:26.777863 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-07-12 13:54:26.777868 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0 2025-07-12 13:54:26.777876 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-07-12 13:54:26.777881 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-07-12 13:54:26.777886 | orchestrator | 2025-07-12 13:54:26.777891 | orchestrator | 2025-07-12 13:54:26.777895 | orchestrator | 2025-07-12 13:54:26.777900 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:54:26.777905 | orchestrator | Saturday 12 July 2025 13:54:25 +0000 (0:00:00.256) 0:12:05.088 ********* 2025-07-12 13:54:26.777913 | orchestrator | =============================================================================== 2025-07-12 13:54:26.777918 | orchestrator | 
ceph-container-common : Pulling Ceph container image ------------------ 110.21s 2025-07-12 13:54:26.777923 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 45.48s 2025-07-12 13:54:26.777928 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 29.87s 2025-07-12 13:54:26.777932 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 28.64s 2025-07-12 13:54:26.777937 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.86s 2025-07-12 13:54:26.777942 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.00s 2025-07-12 13:54:26.777946 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 13.00s 2025-07-12 13:54:26.777954 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.33s 2025-07-12 13:54:26.777958 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.05s 2025-07-12 13:54:26.777963 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.99s 2025-07-12 13:54:26.777967 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.77s 2025-07-12 13:54:26.777972 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.19s 2025-07-12 13:54:26.777976 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 5.18s 2025-07-12 13:54:26.777980 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.76s 2025-07-12 13:54:26.777985 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.00s 2025-07-12 13:54:26.777989 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.71s 2025-07-12 13:54:26.777993 | orchestrator | ceph-facts : 
Set_fact _monitor_addresses - ipv4 ------------------------- 3.67s 2025-07-12 13:54:26.777998 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.50s 2025-07-12 13:54:26.778002 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.49s 2025-07-12 13:54:26.778007 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.18s 2025-07-12 13:54:26.778011 | orchestrator | 2025-07-12 13:54:26 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:54:26.778033 | orchestrator | 2025-07-12 13:54:26 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:54:29.801397 | orchestrator | 2025-07-12 13:54:29 | INFO  | Task 7ab15e41-5dbb-49bc-a91d-3da67b000176 is in state STARTED 2025-07-12 13:54:29.802346 | orchestrator | 2025-07-12 13:54:29 | INFO  | Task 2d14dcd1-52e8-4c63-863a-5f0ec55fae02 is in state STARTED 2025-07-12 13:54:29.803877 | orchestrator | 2025-07-12 13:54:29 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:54:29.804104 | orchestrator | 2025-07-12 13:54:29 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:54:32.853429 | orchestrator | 2025-07-12 13:54:32 | INFO  | Task 7ab15e41-5dbb-49bc-a91d-3da67b000176 is in state STARTED 2025-07-12 13:54:32.855063 | orchestrator | 2025-07-12 13:54:32 | INFO  | Task 2d14dcd1-52e8-4c63-863a-5f0ec55fae02 is in state STARTED 2025-07-12 13:54:32.857326 | orchestrator | 2025-07-12 13:54:32 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:54:32.857659 | orchestrator | 2025-07-12 13:54:32 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:54:35.906520 | orchestrator | 2025-07-12 13:54:35 | INFO  | Task 7ab15e41-5dbb-49bc-a91d-3da67b000176 is in state STARTED 2025-07-12 13:54:35.908798 | orchestrator | 2025-07-12 13:54:35 | INFO  | Task 2d14dcd1-52e8-4c63-863a-5f0ec55fae02 is in state STARTED 
2025-07-12 13:54:35.910246 | orchestrator | 2025-07-12 13:54:35 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:54:35.910274 | orchestrator | 2025-07-12 13:54:35 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:54:38.953209 | orchestrator | 2025-07-12 13:54:38 | INFO  | Task 7ab15e41-5dbb-49bc-a91d-3da67b000176 is in state STARTED 2025-07-12 13:54:38.953835 | orchestrator | 2025-07-12 13:54:38 | INFO  | Task 2d14dcd1-52e8-4c63-863a-5f0ec55fae02 is in state STARTED 2025-07-12 13:54:38.956021 | orchestrator | 2025-07-12 13:54:38 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:54:38.956072 | orchestrator | 2025-07-12 13:54:38 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:54:41.999882 | orchestrator | 2025-07-12 13:54:41 | INFO  | Task 7ab15e41-5dbb-49bc-a91d-3da67b000176 is in state SUCCESS 2025-07-12 13:54:42.001036 | orchestrator | 2025-07-12 13:54:42.001074 | orchestrator | 2025-07-12 13:54:42.001086 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 13:54:42.001097 | orchestrator | 2025-07-12 13:54:42.001107 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 13:54:42.001117 | orchestrator | Saturday 12 July 2025 13:51:57 +0000 (0:00:00.267) 0:00:00.267 ********* 2025-07-12 13:54:42.001128 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:42.001140 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:42.001149 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:42.001158 | orchestrator | 2025-07-12 13:54:42.001168 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 13:54:42.001178 | orchestrator | Saturday 12 July 2025 13:51:57 +0000 (0:00:00.281) 0:00:00.548 ********* 2025-07-12 13:54:42.001188 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-07-12 
13:54:42.001198 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-07-12 13:54:42.001207 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-07-12 13:54:42.001217 | orchestrator | 2025-07-12 13:54:42.001226 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-07-12 13:54:42.001236 | orchestrator | 2025-07-12 13:54:42.001245 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-07-12 13:54:42.001254 | orchestrator | Saturday 12 July 2025 13:51:58 +0000 (0:00:00.455) 0:00:01.004 ********* 2025-07-12 13:54:42.001264 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:54:42.001273 | orchestrator | 2025-07-12 13:54:42.001283 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-07-12 13:54:42.001292 | orchestrator | Saturday 12 July 2025 13:51:58 +0000 (0:00:00.525) 0:00:01.530 ********* 2025-07-12 13:54:42.001320 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-12 13:54:42.001331 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-12 13:54:42.001340 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-12 13:54:42.001350 | orchestrator | 2025-07-12 13:54:42.001359 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-07-12 13:54:42.001368 | orchestrator | Saturday 12 July 2025 13:51:59 +0000 (0:00:00.634) 0:00:02.164 ********* 2025-07-12 13:54:42.001381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 
'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 13:54:42.001397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 13:54:42.001474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 13:54:42.001490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 13:54:42.001572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 13:54:42.001693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 13:54:42.001725 | orchestrator | 2025-07-12 13:54:42.001736 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-07-12 13:54:42.001747 | orchestrator | Saturday 12 July 2025 13:52:01 +0000 (0:00:01.838) 
0:00:04.002 ********* 2025-07-12 13:54:42.001757 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:54:42.001767 | orchestrator | 2025-07-12 13:54:42.001776 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-07-12 13:54:42.001786 | orchestrator | Saturday 12 July 2025 13:52:01 +0000 (0:00:00.510) 0:00:04.513 ********* 2025-07-12 13:54:42.001812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 13:54:42.001824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 13:54:42.001835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 13:54:42.001845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 13:54:42.001875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 13:54:42.001887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 13:54:42.001897 | orchestrator | 2025-07-12 13:54:42.001907 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-07-12 13:54:42.001917 | orchestrator | Saturday 12 July 2025 13:52:04 +0000 (0:00:02.513) 0:00:07.026 ********* 2025-07-12 13:54:42.001927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 13:54:42.001938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 13:54:42.001954 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:42.001977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 13:54:42.001988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 13:54:42.001999 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:42.002009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 13:54:42.002092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 13:54:42.002111 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:42.002121 | orchestrator | 2025-07-12 13:54:42.002131 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-07-12 13:54:42.002140 | orchestrator | Saturday 12 July 2025 13:52:05 +0000 (0:00:01.366) 0:00:08.392 ********* 2025-07-12 13:54:42.002162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 13:54:42.002174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 13:54:42.002184 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:42.002194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 13:54:42.002213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 13:54:42.002225 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:42.002249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 13:54:42.002262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 13:54:42.002274 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:42.002285 | orchestrator | 2025-07-12 13:54:42.002296 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-07-12 13:54:42.002307 | orchestrator | Saturday 12 July 2025 13:52:06 +0000 (0:00:00.883) 0:00:09.276 ********* 2025-07-12 13:54:42.002319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 13:54:42.002337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 13:54:42.002349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 13:54:42.002374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 13:54:42.002387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 13:54:42.002411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 13:54:42.002423 | orchestrator | 2025-07-12 13:54:42.002454 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-07-12 13:54:42.002466 | orchestrator | Saturday 12 July 2025 13:52:08 +0000 (0:00:02.284) 
0:00:11.560 *********
2025-07-12 13:54:42.002477 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:54:42.002488 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:54:42.002499 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:54:42.002511 | orchestrator |
2025-07-12 13:54:42.002522 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] *************
2025-07-12 13:54:42.002532 | orchestrator | Saturday 12 July 2025 13:52:12 +0000 (0:00:03.865) 0:00:15.426 *********
2025-07-12 13:54:42.002543 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:54:42.002554 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:54:42.002566 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:54:42.002575 | orchestrator |
2025-07-12 13:54:42.002585 | orchestrator | TASK [opensearch : Check opensearch containers] ********************************
2025-07-12 13:54:42.002595 | orchestrator | Saturday 12 July 2025 13:52:14 +0000 (0:00:01.774) 0:00:17.200 *********
2025-07-12 13:54:42.002622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-07-12 13:54:42.002633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch',
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 13:54:42.002650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 13:54:42.002661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 13:54:42.002682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 13:54:42.002694 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 13:54:42.002711 | orchestrator | 2025-07-12 13:54:42.002721 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-07-12 13:54:42.002731 | orchestrator | Saturday 12 July 2025 13:52:16 +0000 (0:00:02.068) 0:00:19.269 ********* 2025-07-12 13:54:42.002740 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:42.002750 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:42.002759 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:42.002769 | orchestrator | 2025-07-12 13:54:42.002778 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-07-12 13:54:42.002787 | orchestrator | Saturday 12 July 2025 13:52:16 +0000 (0:00:00.357) 0:00:19.627 ********* 2025-07-12 13:54:42.002797 | orchestrator | 2025-07-12 13:54:42.002806 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-07-12 
13:54:42.002816 | orchestrator | Saturday 12 July 2025 13:52:17 +0000 (0:00:00.094) 0:00:19.722 ********* 2025-07-12 13:54:42.002825 | orchestrator | 2025-07-12 13:54:42.002835 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-07-12 13:54:42.002844 | orchestrator | Saturday 12 July 2025 13:52:17 +0000 (0:00:00.086) 0:00:19.808 ********* 2025-07-12 13:54:42.002854 | orchestrator | 2025-07-12 13:54:42.002863 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-07-12 13:54:42.002873 | orchestrator | Saturday 12 July 2025 13:52:17 +0000 (0:00:00.268) 0:00:20.077 ********* 2025-07-12 13:54:42.002882 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:42.002892 | orchestrator | 2025-07-12 13:54:42.002901 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-07-12 13:54:42.002911 | orchestrator | Saturday 12 July 2025 13:52:17 +0000 (0:00:00.195) 0:00:20.272 ********* 2025-07-12 13:54:42.002920 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:42.002930 | orchestrator | 2025-07-12 13:54:42.002940 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-07-12 13:54:42.002949 | orchestrator | Saturday 12 July 2025 13:52:17 +0000 (0:00:00.282) 0:00:20.554 ********* 2025-07-12 13:54:42.002958 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:54:42.002968 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:54:42.002977 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:54:42.002987 | orchestrator | 2025-07-12 13:54:42.002996 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-07-12 13:54:42.003006 | orchestrator | Saturday 12 July 2025 13:53:19 +0000 (0:01:01.561) 0:01:22.116 ********* 2025-07-12 13:54:42.003015 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:54:42.003025 | orchestrator 
| changed: [testbed-node-1] 2025-07-12 13:54:42.003034 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:54:42.003043 | orchestrator | 2025-07-12 13:54:42.003053 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-07-12 13:54:42.003063 | orchestrator | Saturday 12 July 2025 13:54:29 +0000 (0:01:10.518) 0:02:32.635 ********* 2025-07-12 13:54:42.003072 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:54:42.003082 | orchestrator | 2025-07-12 13:54:42.003091 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-07-12 13:54:42.003101 | orchestrator | Saturday 12 July 2025 13:54:30 +0000 (0:00:00.696) 0:02:33.332 ********* 2025-07-12 13:54:42.003111 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:42.003120 | orchestrator | 2025-07-12 13:54:42.003130 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-07-12 13:54:42.003139 | orchestrator | Saturday 12 July 2025 13:54:33 +0000 (0:00:02.341) 0:02:35.674 ********* 2025-07-12 13:54:42.003149 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:42.003158 | orchestrator | 2025-07-12 13:54:42.003168 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-07-12 13:54:42.003184 | orchestrator | Saturday 12 July 2025 13:54:35 +0000 (0:00:02.187) 0:02:37.861 ********* 2025-07-12 13:54:42.003193 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:54:42.003203 | orchestrator | 2025-07-12 13:54:42.003217 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-07-12 13:54:42.003227 | orchestrator | Saturday 12 July 2025 13:54:37 +0000 (0:00:02.603) 0:02:40.465 ********* 2025-07-12 13:54:42.003236 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:54:42.003246 | orchestrator | 
2025-07-12 13:54:42.003260 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:54:42.003272 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 13:54:42.003283 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-12 13:54:42.003293 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-12 13:54:42.003302 | orchestrator | 2025-07-12 13:54:42.003312 | orchestrator | 2025-07-12 13:54:42.003322 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:54:42.003331 | orchestrator | Saturday 12 July 2025 13:54:40 +0000 (0:00:02.364) 0:02:42.830 ********* 2025-07-12 13:54:42.003340 | orchestrator | =============================================================================== 2025-07-12 13:54:42.003350 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 70.52s 2025-07-12 13:54:42.003360 | orchestrator | opensearch : Restart opensearch container ------------------------------ 61.56s 2025-07-12 13:54:42.003369 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.87s 2025-07-12 13:54:42.003379 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.60s 2025-07-12 13:54:42.003388 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.51s 2025-07-12 13:54:42.003397 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.36s 2025-07-12 13:54:42.003407 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.34s 2025-07-12 13:54:42.003416 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.28s 2025-07-12 13:54:42.003426 | 
orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.19s 2025-07-12 13:54:42.003454 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.07s 2025-07-12 13:54:42.003464 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.84s 2025-07-12 13:54:42.003474 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.77s 2025-07-12 13:54:42.003484 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.37s 2025-07-12 13:54:42.003493 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.88s 2025-07-12 13:54:42.003503 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.70s 2025-07-12 13:54:42.003512 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.63s 2025-07-12 13:54:42.003521 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s 2025-07-12 13:54:42.003531 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.51s 2025-07-12 13:54:42.003540 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s 2025-07-12 13:54:42.003550 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.45s 2025-07-12 13:54:42.003559 | orchestrator | 2025-07-12 13:54:41 | INFO  | Task 2d14dcd1-52e8-4c63-863a-5f0ec55fae02 is in state STARTED 2025-07-12 13:54:42.003841 | orchestrator | 2025-07-12 13:54:42 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:54:42.003870 | orchestrator | 2025-07-12 13:54:42 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:54:45.053754 | orchestrator | 2025-07-12 13:54:45 | INFO  | Task 2d14dcd1-52e8-4c63-863a-5f0ec55fae02 is in state STARTED 2025-07-12 
13:54:45.054219 | orchestrator | 2025-07-12 13:54:45 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:54:45.054376 | orchestrator | 2025-07-12 13:54:45 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:54:48.109888 | orchestrator | 2025-07-12 13:54:48 | INFO  | Task 2d14dcd1-52e8-4c63-863a-5f0ec55fae02 is in state STARTED 2025-07-12 13:54:48.110167 | orchestrator | 2025-07-12 13:54:48 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:54:48.110187 | orchestrator | 2025-07-12 13:54:48 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:54:51.156590 | orchestrator | 2025-07-12 13:54:51 | INFO  | Task 2d14dcd1-52e8-4c63-863a-5f0ec55fae02 is in state STARTED 2025-07-12 13:54:51.159471 | orchestrator | 2025-07-12 13:54:51 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:54:51.160122 | orchestrator | 2025-07-12 13:54:51 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:54:54.203147 | orchestrator | 2025-07-12 13:54:54 | INFO  | Task 2d14dcd1-52e8-4c63-863a-5f0ec55fae02 is in state STARTED 2025-07-12 13:54:54.204865 | orchestrator | 2025-07-12 13:54:54 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:54:54.204906 | orchestrator | 2025-07-12 13:54:54 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:54:57.254472 | orchestrator | 2025-07-12 13:54:57 | INFO  | Task 2d14dcd1-52e8-4c63-863a-5f0ec55fae02 is in state STARTED 2025-07-12 13:54:57.256569 | orchestrator | 2025-07-12 13:54:57 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:54:57.256650 | orchestrator | 2025-07-12 13:54:57 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:55:00.299584 | orchestrator | 2025-07-12 13:55:00 | INFO  | Task 2d14dcd1-52e8-4c63-863a-5f0ec55fae02 is in state STARTED 2025-07-12 13:55:00.301962 | orchestrator | 2025-07-12 13:55:00 | INFO  | Task 
0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:55:00.302170 | orchestrator | 2025-07-12 13:55:00 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:55:03.346905 | orchestrator | 2025-07-12 13:55:03 | INFO  | Task 2d14dcd1-52e8-4c63-863a-5f0ec55fae02 is in state STARTED 2025-07-12 13:55:03.348128 | orchestrator | 2025-07-12 13:55:03 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:55:03.348164 | orchestrator | 2025-07-12 13:55:03 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:55:06.388808 | orchestrator | 2025-07-12 13:55:06 | INFO  | Task 2d14dcd1-52e8-4c63-863a-5f0ec55fae02 is in state STARTED 2025-07-12 13:55:06.391135 | orchestrator | 2025-07-12 13:55:06 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:55:06.391255 | orchestrator | 2025-07-12 13:55:06 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:55:09.441099 | orchestrator | 2025-07-12 13:55:09 | INFO  | Task 2d14dcd1-52e8-4c63-863a-5f0ec55fae02 is in state STARTED 2025-07-12 13:55:09.442517 | orchestrator | 2025-07-12 13:55:09 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:55:09.442551 | orchestrator | 2025-07-12 13:55:09 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:55:12.483495 | orchestrator | 2025-07-12 13:55:12 | INFO  | Task 2d14dcd1-52e8-4c63-863a-5f0ec55fae02 is in state STARTED 2025-07-12 13:55:12.484644 | orchestrator | 2025-07-12 13:55:12 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state STARTED 2025-07-12 13:55:12.484784 | orchestrator | 2025-07-12 13:55:12 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:55:15.535040 | orchestrator | 2025-07-12 13:55:15 | INFO  | Task 7aa86ed7-f566-4724-b014-4d0e7b63c40e is in state STARTED 2025-07-12 13:55:15.536111 | orchestrator | 2025-07-12 13:55:15 | INFO  | Task 79a82a20-112e-4d38-aba6-c6dac2c67783 is in state STARTED 2025-07-12 
13:55:15.538219 | orchestrator | 2025-07-12 13:55:15 | INFO  | Task 2d14dcd1-52e8-4c63-863a-5f0ec55fae02 is in state STARTED 2025-07-12 13:55:15.541663 | orchestrator | 2025-07-12 13:55:15 | INFO  | Task 0e87c983-059b-491f-9da8-90a87b4431d3 is in state SUCCESS 2025-07-12 13:55:15.543273 | orchestrator | 2025-07-12 13:55:15.543314 | orchestrator | 2025-07-12 13:55:15.543396 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-07-12 13:55:15.543439 | orchestrator | 2025-07-12 13:55:15.543453 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-07-12 13:55:15.543465 | orchestrator | Saturday 12 July 2025 13:51:57 +0000 (0:00:00.104) 0:00:00.104 ********* 2025-07-12 13:55:15.543476 | orchestrator | ok: [localhost] => { 2025-07-12 13:55:15.543489 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-07-12 13:55:15.543501 | orchestrator | } 2025-07-12 13:55:15.543682 | orchestrator | 2025-07-12 13:55:15.543699 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-07-12 13:55:15.543711 | orchestrator | Saturday 12 July 2025 13:51:57 +0000 (0:00:00.060) 0:00:00.165 ********* 2025-07-12 13:55:15.543722 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-07-12 13:55:15.543736 | orchestrator | ...ignoring 2025-07-12 13:55:15.543748 | orchestrator | 2025-07-12 13:55:15.543759 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-07-12 13:55:15.543770 | orchestrator | Saturday 12 July 2025 13:52:00 +0000 (0:00:02.848) 0:00:03.014 ********* 2025-07-12 13:55:15.543781 | orchestrator | skipping: [localhost] 2025-07-12 13:55:15.543791 | orchestrator | 2025-07-12 13:55:15.543802 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-07-12 13:55:15.543813 | orchestrator | Saturday 12 July 2025 13:52:00 +0000 (0:00:00.049) 0:00:03.064 ********* 2025-07-12 13:55:15.543823 | orchestrator | ok: [localhost] 2025-07-12 13:55:15.543834 | orchestrator | 2025-07-12 13:55:15.543844 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 13:55:15.543855 | orchestrator | 2025-07-12 13:55:15.543866 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 13:55:15.543877 | orchestrator | Saturday 12 July 2025 13:52:00 +0000 (0:00:00.144) 0:00:03.208 ********* 2025-07-12 13:55:15.543887 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:55:15.543920 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:55:15.543931 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:55:15.543941 | orchestrator | 2025-07-12 13:55:15.543952 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 13:55:15.543962 | orchestrator | Saturday 12 July 2025 13:52:00 +0000 (0:00:00.308) 0:00:03.516 ********* 2025-07-12 13:55:15.543973 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-07-12 13:55:15.543984 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2025-07-12 13:55:15.543995 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-07-12 13:55:15.544005 | orchestrator | 2025-07-12 13:55:15.544016 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-07-12 13:55:15.544026 | orchestrator | 2025-07-12 13:55:15.544037 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-07-12 13:55:15.544048 | orchestrator | Saturday 12 July 2025 13:52:01 +0000 (0:00:00.729) 0:00:04.246 ********* 2025-07-12 13:55:15.544097 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-07-12 13:55:15.544118 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-07-12 13:55:15.544137 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-07-12 13:55:15.544157 | orchestrator | 2025-07-12 13:55:15.544176 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-07-12 13:55:15.544194 | orchestrator | Saturday 12 July 2025 13:52:02 +0000 (0:00:00.451) 0:00:04.697 ********* 2025-07-12 13:55:15.544208 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:55:15.544220 | orchestrator | 2025-07-12 13:55:15.544230 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-07-12 13:55:15.544241 | orchestrator | Saturday 12 July 2025 13:52:02 +0000 (0:00:00.580) 0:00:05.278 ********* 2025-07-12 13:55:15.544273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 13:55:15.544298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 13:55:15.544323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 13:55:15.544338 | orchestrator | 2025-07-12 13:55:15.544360 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-07-12 13:55:15.544373 | orchestrator | Saturday 12 July 2025 13:52:06 +0000 (0:00:03.412) 0:00:08.691 ********* 2025-07-12 13:55:15.544385 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:55:15.544397 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:55:15.544453 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:15.544469 | orchestrator | 2025-07-12 13:55:15.544482 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-07-12 13:55:15.544494 | orchestrator | Saturday 12 July 2025 13:52:06 +0000 (0:00:00.670) 0:00:09.361 ********* 2025-07-12 13:55:15.544506 | orchestrator | 
skipping: [testbed-node-2] 2025-07-12 13:55:15.544518 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:55:15.544531 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:55:15.544543 | orchestrator | 2025-07-12 13:55:15.544556 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-07-12 13:55:15.544568 | orchestrator | Saturday 12 July 2025 13:52:08 +0000 (0:00:01.541) 0:00:10.902 ********* 2025-07-12 13:55:15.544599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 13:55:15.544633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 13:55:15.544652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 13:55:15.544673 | orchestrator | 2025-07-12 13:55:15.544685 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-07-12 13:55:15.544696 | orchestrator | Saturday 12 July 2025 13:52:13 +0000 (0:00:04.771) 0:00:15.674 ********* 2025-07-12 13:55:15.544707 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:55:15.544718 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:15.544729 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:55:15.544740 | orchestrator | 2025-07-12 13:55:15.544751 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-07-12 13:55:15.544762 | orchestrator | Saturday 12 July 2025 13:52:14 +0000 (0:00:01.167) 0:00:16.841 ********* 2025-07-12 13:55:15.544773 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:55:15.544784 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:55:15.544794 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:55:15.544805 | orchestrator | 2025-07-12 13:55:15.544816 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-07-12 13:55:15.544827 | orchestrator | Saturday 12 July 2025 13:52:18 +0000 (0:00:04.330) 0:00:21.172 ********* 2025-07-12 13:55:15.544838 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:55:15.544849 | orchestrator | 2025-07-12 13:55:15.544860 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-07-12 13:55:15.544871 | orchestrator | Saturday 12 July 2025 13:52:19 +0000 (0:00:01.025) 0:00:22.198 ********* 2025-07-12 13:55:15.544892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 13:55:15.544914 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:55:15.544932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 13:55:15.544944 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:55:15.544965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 13:55:15.544986 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:15.544997 | orchestrator | 2025-07-12 13:55:15.545009 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-07-12 13:55:15.545019 | orchestrator | Saturday 12 July 2025 13:52:22 
+0000 (0:00:03.087) 0:00:25.285 ********* 2025-07-12 13:55:15.545036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 13:55:15.545049 | orchestrator | skipping: [testbed-node-1] 2025-07-12 
13:55:15.545069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 13:55:15.545096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 
'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 13:55:15.545109 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:15.545120 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:55:15.545138 | orchestrator | 2025-07-12 13:55:15.545157 | orchestrator | TASK [service-cert-copy : mariadb | 
Copying over backend internal TLS key] ***** 2025-07-12 13:55:15.545176 | orchestrator | Saturday 12 July 2025 13:52:25 +0000 (0:00:03.040) 0:00:28.326 ********* 2025-07-12 13:55:15.545207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', '']}}}})  2025-07-12 13:55:15.545238 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:55:15.545258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 13:55:15.545270 
| orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:15.545281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 13:55:15.545293 | orchestrator | skipping: [testbed-node-1] 2025-07-12 
13:55:15.545303 | orchestrator | 2025-07-12 13:55:15.545314 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-07-12 13:55:15.545332 | orchestrator | Saturday 12 July 2025 13:52:29 +0000 (0:00:03.527) 0:00:31.854 ********* 2025-07-12 13:55:15.545357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 13:55:15.545371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}}}}) 2025-07-12 13:55:15.545392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 13:55:15.545440 | orchestrator | 2025-07-12 13:55:15.545461 | orchestrator | TASK [mariadb : Create MariaDB 
volume] ***************************************** 2025-07-12 13:55:15.545472 | orchestrator | Saturday 12 July 2025 13:52:32 +0000 (0:00:03.800) 0:00:35.655 ********* 2025-07-12 13:55:15.545483 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:55:15.545494 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:55:15.545504 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:55:15.545514 | orchestrator | 2025-07-12 13:55:15.545525 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-07-12 13:55:15.545535 | orchestrator | Saturday 12 July 2025 13:52:34 +0000 (0:00:01.313) 0:00:36.969 ********* 2025-07-12 13:55:15.545546 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:55:15.545557 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:55:15.545567 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:55:15.545578 | orchestrator | 2025-07-12 13:55:15.545588 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-07-12 13:55:15.545599 | orchestrator | Saturday 12 July 2025 13:52:34 +0000 (0:00:00.375) 0:00:37.344 ********* 2025-07-12 13:55:15.545609 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:55:15.545620 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:55:15.545630 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:55:15.545640 | orchestrator | 2025-07-12 13:55:15.545651 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-07-12 13:55:15.545662 | orchestrator | Saturday 12 July 2025 13:52:34 +0000 (0:00:00.316) 0:00:37.661 ********* 2025-07-12 13:55:15.545673 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-07-12 13:55:15.545684 | orchestrator | ...ignoring 2025-07-12 13:55:15.545695 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-07-12 13:55:15.545706 | orchestrator | ...ignoring 2025-07-12 13:55:15.545716 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-07-12 13:55:15.545727 | orchestrator | ...ignoring 2025-07-12 13:55:15.545737 | orchestrator | 2025-07-12 13:55:15.545748 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-07-12 13:55:15.545767 | orchestrator | Saturday 12 July 2025 13:52:46 +0000 (0:00:11.102) 0:00:48.763 ********* 2025-07-12 13:55:15.545778 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:55:15.545788 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:55:15.545799 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:55:15.545809 | orchestrator | 2025-07-12 13:55:15.545820 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-07-12 13:55:15.545830 | orchestrator | Saturday 12 July 2025 13:52:46 +0000 (0:00:00.663) 0:00:49.426 ********* 2025-07-12 13:55:15.545841 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:55:15.545851 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:55:15.545862 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:15.545872 | orchestrator | 2025-07-12 13:55:15.545883 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-07-12 13:55:15.545893 | orchestrator | Saturday 12 July 2025 13:52:47 +0000 (0:00:00.426) 0:00:49.853 ********* 2025-07-12 13:55:15.545904 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:55:15.545914 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:55:15.545925 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:15.545935 | orchestrator | 2025-07-12 13:55:15.545946 | orchestrator | TASK 
[mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-07-12 13:55:15.545956 | orchestrator | Saturday 12 July 2025 13:52:47 +0000 (0:00:00.428) 0:00:50.282 ********* 2025-07-12 13:55:15.545967 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:55:15.545977 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:55:15.545988 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:15.545998 | orchestrator | 2025-07-12 13:55:15.546009 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-07-12 13:55:15.546080 | orchestrator | Saturday 12 July 2025 13:52:48 +0000 (0:00:00.496) 0:00:50.779 ********* 2025-07-12 13:55:15.546093 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:55:15.546103 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:55:15.546114 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:55:15.546124 | orchestrator | 2025-07-12 13:55:15.546135 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-07-12 13:55:15.546145 | orchestrator | Saturday 12 July 2025 13:52:48 +0000 (0:00:00.661) 0:00:51.440 ********* 2025-07-12 13:55:15.546156 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:55:15.546172 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:55:15.546191 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:15.546210 | orchestrator | 2025-07-12 13:55:15.546230 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-07-12 13:55:15.546251 | orchestrator | Saturday 12 July 2025 13:52:49 +0000 (0:00:00.473) 0:00:51.914 ********* 2025-07-12 13:55:15.546270 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:55:15.546285 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:15.546296 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-07-12 13:55:15.546307 | orchestrator | 2025-07-12 
13:55:15.546317 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-07-12 13:55:15.546328 | orchestrator | Saturday 12 July 2025 13:52:49 +0000 (0:00:00.380) 0:00:52.295 ********* 2025-07-12 13:55:15.546338 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:55:15.546349 | orchestrator | 2025-07-12 13:55:15.546359 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-07-12 13:55:15.546370 | orchestrator | Saturday 12 July 2025 13:53:00 +0000 (0:00:10.921) 0:01:03.217 ********* 2025-07-12 13:55:15.546380 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:55:15.546391 | orchestrator | 2025-07-12 13:55:15.546402 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-07-12 13:55:15.546473 | orchestrator | Saturday 12 July 2025 13:53:00 +0000 (0:00:00.129) 0:01:03.346 ********* 2025-07-12 13:55:15.546486 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:55:15.546497 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:55:15.546514 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:15.546535 | orchestrator | 2025-07-12 13:55:15.546546 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-07-12 13:55:15.546557 | orchestrator | Saturday 12 July 2025 13:53:01 +0000 (0:00:01.005) 0:01:04.351 ********* 2025-07-12 13:55:15.546567 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:55:15.546578 | orchestrator | 2025-07-12 13:55:15.546588 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-07-12 13:55:15.546599 | orchestrator | Saturday 12 July 2025 13:53:09 +0000 (0:00:07.975) 0:01:12.327 ********* 2025-07-12 13:55:15.546609 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:55:15.546619 | orchestrator | 2025-07-12 13:55:15.546630 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB 
service to sync WSREP] ******* 2025-07-12 13:55:15.546640 | orchestrator | Saturday 12 July 2025 13:53:11 +0000 (0:00:01.621) 0:01:13.949 ********* 2025-07-12 13:55:15.546651 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:55:15.546661 | orchestrator | 2025-07-12 13:55:15.546672 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-07-12 13:55:15.546683 | orchestrator | Saturday 12 July 2025 13:53:13 +0000 (0:00:02.642) 0:01:16.591 ********* 2025-07-12 13:55:15.546693 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:55:15.546704 | orchestrator | 2025-07-12 13:55:15.546714 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-07-12 13:55:15.546725 | orchestrator | Saturday 12 July 2025 13:53:14 +0000 (0:00:00.119) 0:01:16.711 ********* 2025-07-12 13:55:15.546735 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:55:15.546746 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:55:15.546756 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:15.546766 | orchestrator | 2025-07-12 13:55:15.546777 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-07-12 13:55:15.546787 | orchestrator | Saturday 12 July 2025 13:53:14 +0000 (0:00:00.511) 0:01:17.223 ********* 2025-07-12 13:55:15.546798 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:55:15.546808 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-07-12 13:55:15.546819 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:55:15.546829 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:55:15.546839 | orchestrator | 2025-07-12 13:55:15.546850 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-07-12 13:55:15.546861 | orchestrator | skipping: no hosts matched 2025-07-12 13:55:15.546871 | orchestrator | 2025-07-12 13:55:15.546882 
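The "Wait for first MariaDB service to sync WSREP" handler above blocks until the Galera node reports itself synced. A hedged sketch of such a wait loop, assuming a `query` callable that returns the value of a Galera status variable (the actual handler is Ansible, and the variable name `wsrep_local_state_comment` is standard Galera):

```python
import time

def wait_for_wsrep_sync(query, timeout=360, interval=2):
    """Poll wsrep_local_state_comment until the node reports 'Synced'.

    query: callable taking a status-variable name and returning its value
           (e.g. via `SHOW STATUS LIKE 'wsrep_local_state_comment'`).
    Returns True once synced, False if the timeout elapses first.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if query("wsrep_local_state_comment") == "Synced":
            return True
        time.sleep(interval)
    return False
```

In the log this wait completes in a few seconds per node once the joiner has finished its state transfer from the bootstrap host.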
| orchestrator | PLAY [Start mariadb services] ************************************************** 2025-07-12 13:55:15.546892 | orchestrator | 2025-07-12 13:55:15.546903 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-07-12 13:55:15.546913 | orchestrator | Saturday 12 July 2025 13:53:14 +0000 (0:00:00.339) 0:01:17.563 ********* 2025-07-12 13:55:15.546923 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:55:15.546934 | orchestrator | 2025-07-12 13:55:15.546944 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-07-12 13:55:15.546954 | orchestrator | Saturday 12 July 2025 13:53:35 +0000 (0:00:20.664) 0:01:38.227 ********* 2025-07-12 13:55:15.546965 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:55:15.546975 | orchestrator | 2025-07-12 13:55:15.546986 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-07-12 13:55:15.546996 | orchestrator | Saturday 12 July 2025 13:53:56 +0000 (0:00:20.607) 0:01:58.835 ********* 2025-07-12 13:55:15.547007 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:55:15.547016 | orchestrator | 2025-07-12 13:55:15.547026 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-07-12 13:55:15.547035 | orchestrator | 2025-07-12 13:55:15.547044 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-07-12 13:55:15.547054 | orchestrator | Saturday 12 July 2025 13:53:58 +0000 (0:00:02.458) 0:02:01.293 ********* 2025-07-12 13:55:15.547063 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:55:15.547072 | orchestrator | 2025-07-12 13:55:15.547082 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-07-12 13:55:15.547105 | orchestrator | Saturday 12 July 2025 13:54:18 +0000 (0:00:19.844) 0:02:21.137 ********* 2025-07-12 13:55:15.547115 | 
orchestrator | ok: [testbed-node-2] 2025-07-12 13:55:15.547124 | orchestrator | 2025-07-12 13:55:15.547134 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-07-12 13:55:15.547143 | orchestrator | Saturday 12 July 2025 13:54:39 +0000 (0:00:20.554) 0:02:41.691 ********* 2025-07-12 13:55:15.547152 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:55:15.547162 | orchestrator | 2025-07-12 13:55:15.547171 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-07-12 13:55:15.547181 | orchestrator | 2025-07-12 13:55:15.547190 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-07-12 13:55:15.547200 | orchestrator | Saturday 12 July 2025 13:54:41 +0000 (0:00:02.841) 0:02:44.533 ********* 2025-07-12 13:55:15.547214 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:55:15.547230 | orchestrator | 2025-07-12 13:55:15.547248 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-07-12 13:55:15.547266 | orchestrator | Saturday 12 July 2025 13:54:53 +0000 (0:00:11.788) 0:02:56.321 ********* 2025-07-12 13:55:15.547277 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:55:15.547287 | orchestrator | 2025-07-12 13:55:15.547297 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-07-12 13:55:15.547306 | orchestrator | Saturday 12 July 2025 13:54:58 +0000 (0:00:04.610) 0:03:00.931 ********* 2025-07-12 13:55:15.547316 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:55:15.547325 | orchestrator | 2025-07-12 13:55:15.547334 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-07-12 13:55:15.547344 | orchestrator | 2025-07-12 13:55:15.547353 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-07-12 13:55:15.547362 | orchestrator | 
Saturday 12 July 2025 13:55:00 +0000 (0:00:02.402) 0:03:03.333 ********* 2025-07-12 13:55:15.547371 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:55:15.547381 | orchestrator | 2025-07-12 13:55:15.547390 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-07-12 13:55:15.547400 | orchestrator | Saturday 12 July 2025 13:55:01 +0000 (0:00:00.549) 0:03:03.883 ********* 2025-07-12 13:55:15.547466 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:55:15.547479 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:15.547489 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:55:15.547499 | orchestrator | 2025-07-12 13:55:15.547508 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-07-12 13:55:15.547517 | orchestrator | Saturday 12 July 2025 13:55:03 +0000 (0:00:02.595) 0:03:06.478 ********* 2025-07-12 13:55:15.547526 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:55:15.547536 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:15.547545 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:55:15.547554 | orchestrator | 2025-07-12 13:55:15.547564 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-07-12 13:55:15.547573 | orchestrator | Saturday 12 July 2025 13:55:05 +0000 (0:00:02.025) 0:03:08.504 ********* 2025-07-12 13:55:15.547582 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:55:15.547592 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:15.547601 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:55:15.547610 | orchestrator | 2025-07-12 13:55:15.547620 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-07-12 13:55:15.547629 | orchestrator | Saturday 12 July 2025 13:55:07 +0000 (0:00:02.058) 0:03:10.562 ********* 2025-07-12 13:55:15.547638 | 
orchestrator | skipping: [testbed-node-1] 2025-07-12 13:55:15.547648 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:15.547657 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:55:15.547666 | orchestrator | 2025-07-12 13:55:15.547675 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-07-12 13:55:15.547685 | orchestrator | Saturday 12 July 2025 13:55:09 +0000 (0:00:02.012) 0:03:12.574 ********* 2025-07-12 13:55:15.547702 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:55:15.547712 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:55:15.547721 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:55:15.547730 | orchestrator | 2025-07-12 13:55:15.547737 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-07-12 13:55:15.547745 | orchestrator | Saturday 12 July 2025 13:55:12 +0000 (0:00:02.960) 0:03:15.535 ********* 2025-07-12 13:55:15.547753 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:55:15.547760 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:55:15.547768 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:15.547776 | orchestrator | 2025-07-12 13:55:15.547783 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:55:15.547791 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-07-12 13:55:15.547799 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-07-12 13:55:15.547809 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-07-12 13:55:15.547816 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-07-12 13:55:15.547824 | orchestrator | 2025-07-12 13:55:15.547832 | orchestrator | 2025-07-12 13:55:15.547840 | orchestrator | 
TASKS RECAP ******************************************************************** 2025-07-12 13:55:15.547847 | orchestrator | Saturday 12 July 2025 13:55:13 +0000 (0:00:00.235) 0:03:15.770 ********* 2025-07-12 13:55:15.547855 | orchestrator | =============================================================================== 2025-07-12 13:55:15.547862 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 41.16s 2025-07-12 13:55:15.547870 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 40.51s 2025-07-12 13:55:15.547883 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.79s 2025-07-12 13:55:15.547891 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.10s 2025-07-12 13:55:15.547899 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.92s 2025-07-12 13:55:15.547906 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.98s 2025-07-12 13:55:15.547914 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.30s 2025-07-12 13:55:15.547921 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.77s 2025-07-12 13:55:15.547929 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.61s 2025-07-12 13:55:15.547937 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.33s 2025-07-12 13:55:15.547944 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.80s 2025-07-12 13:55:15.547952 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.53s 2025-07-12 13:55:15.547959 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.41s 2025-07-12 13:55:15.547967 | orchestrator | service-cert-copy : 
mariadb | Copying over extra CA certificates -------- 3.09s 2025-07-12 13:55:15.547975 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.04s 2025-07-12 13:55:15.547982 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.96s 2025-07-12 13:55:15.547990 | orchestrator | Check MariaDB service --------------------------------------------------- 2.85s 2025-07-12 13:55:15.547997 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.64s 2025-07-12 13:55:15.548005 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.60s 2025-07-12 13:55:15.548018 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.40s 2025-07-12 13:55:15.548030 | orchestrator | 2025-07-12 13:55:15 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:55:18.592562 | orchestrator | 2025-07-12 13:55:18 | INFO  | Task 7aa86ed7-f566-4724-b014-4d0e7b63c40e is in state STARTED 2025-07-12 13:55:18.594484 | orchestrator | 2025-07-12 13:55:18 | INFO  | Task 79a82a20-112e-4d38-aba6-c6dac2c67783 is in state STARTED 2025-07-12 13:55:18.595831 | orchestrator | 2025-07-12 13:55:18 | INFO  | Task 2d14dcd1-52e8-4c63-863a-5f0ec55fae02 is in state STARTED 2025-07-12 13:55:18.595868 | orchestrator | 2025-07-12 13:55:18 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:55:21.640649 | orchestrator | 2025-07-12 13:55:21 | INFO  | Task 7aa86ed7-f566-4724-b014-4d0e7b63c40e is in state STARTED 2025-07-12 13:55:21.643326 | orchestrator | 2025-07-12 13:55:21 | INFO  | Task 79a82a20-112e-4d38-aba6-c6dac2c67783 is in state STARTED 2025-07-12 13:55:21.645033 | orchestrator | 2025-07-12 13:55:21 | INFO  | Task 2d14dcd1-52e8-4c63-863a-5f0ec55fae02 is in state STARTED 2025-07-12 13:55:21.645060 | orchestrator | 2025-07-12 13:55:21 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:55:24.696910 | 
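The repeating "Task … is in state STARTED / Wait 1 second(s) until the next check" lines that follow come from the orchestrator polling its remote deployment tasks until each reaches a terminal state. A sketch of that polling loop, with assumed names (`get_state` standing in for whatever task-status API the orchestrator uses):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1, sleep=time.sleep):
    """Poll task states until every task is terminal, logging each
    check like the console output above (sketch, assumed API).

    get_state: callable mapping a task id to its current state string.
    Returns a dict of final states per task id.
    """
    pending = set(task_ids)
    states = {}
    while pending:
        for tid in sorted(pending):
            states[tid] = get_state(tid)
            print(f"Task {tid} is in state {states[tid]}")
            if states[tid] in ("SUCCESS", "FAILURE"):
                pending.discard(tid)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            sleep(interval)
    return states
```

The `sleep` parameter is injected only to make the loop testable; the behavior matches the log, where three task IDs are polled each cycle until one transitions to SUCCESS.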
orchestrator | 2025-07-12 13:55:24 | INFO  | Task 7aa86ed7-f566-4724-b014-4d0e7b63c40e is in state STARTED [... identical polling output repeats every 3 seconds, all three tasks remaining in state STARTED, until 13:56:34 ...] 2025-07-12 13:56:34.785416 | orchestrator | 2025-07-12 13:56:34 | INFO  | Wait 1 second(s) until the next
check 2025-07-12 13:56:34.781259 | orchestrator | 2025-07-12 13:56:34 | INFO  | Task 7aa86ed7-f566-4724-b014-4d0e7b63c40e is in state STARTED 2025-07-12 13:56:34.783541 | orchestrator | 2025-07-12 13:56:34 | INFO  | Task 79a82a20-112e-4d38-aba6-c6dac2c67783 is in state STARTED 2025-07-12 13:56:34.785385 | orchestrator | 2025-07-12 13:56:34 | INFO  | Task 2d14dcd1-52e8-4c63-863a-5f0ec55fae02 is in state STARTED 2025-07-12 13:56:34.785416 | orchestrator | 2025-07-12 13:56:34 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:56:37.835954 | orchestrator | 2025-07-12 13:56:37 | INFO  | Task bb94a130-a8be-4cf2-ab8e-23c1d451ae35 is in state STARTED 2025-07-12 13:56:37.838975 | orchestrator | 2025-07-12 13:56:37 | INFO  | Task 7aa86ed7-f566-4724-b014-4d0e7b63c40e is in state STARTED 2025-07-12 13:56:37.841195 | orchestrator | 2025-07-12 13:56:37 | INFO  | Task 79a82a20-112e-4d38-aba6-c6dac2c67783 is in state STARTED 2025-07-12 13:56:37.845094 | orchestrator | 2025-07-12 13:56:37 | INFO  | Task 2d14dcd1-52e8-4c63-863a-5f0ec55fae02 is in state SUCCESS 2025-07-12 13:56:37.847835 | orchestrator | 2025-07-12 13:56:37.847917 | orchestrator | 2025-07-12 13:56:37.848038 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-07-12 13:56:37.848052 | orchestrator | 2025-07-12 13:56:37.848063 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-07-12 13:56:37.848075 | orchestrator | Saturday 12 July 2025 13:54:30 +0000 (0:00:00.618) 0:00:00.618 ********* 2025-07-12 13:56:37.848086 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:56:37.848098 | orchestrator | 2025-07-12 13:56:37.848459 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-07-12 13:56:37.848505 | orchestrator | Saturday 12 July 2025 13:54:31 +0000 (0:00:00.669) 0:00:01.287 
********* 2025-07-12 13:56:37.848517 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:56:37.848529 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:56:37.848540 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:56:37.848551 | orchestrator | 2025-07-12 13:56:37.848561 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-07-12 13:56:37.848572 | orchestrator | Saturday 12 July 2025 13:54:31 +0000 (0:00:00.629) 0:00:01.917 ********* 2025-07-12 13:56:37.848583 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:56:37.848593 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:56:37.848603 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:56:37.848614 | orchestrator | 2025-07-12 13:56:37.848625 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-07-12 13:56:37.848635 | orchestrator | Saturday 12 July 2025 13:54:32 +0000 (0:00:00.273) 0:00:02.191 ********* 2025-07-12 13:56:37.848646 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:56:37.848885 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:56:37.848898 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:56:37.848909 | orchestrator | 2025-07-12 13:56:37.848920 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-07-12 13:56:37.848931 | orchestrator | Saturday 12 July 2025 13:54:32 +0000 (0:00:00.777) 0:00:02.968 ********* 2025-07-12 13:56:37.848942 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:56:37.848952 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:56:37.848963 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:56:37.848973 | orchestrator | 2025-07-12 13:56:37.848984 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-07-12 13:56:37.848995 | orchestrator | Saturday 12 July 2025 13:54:33 +0000 (0:00:00.287) 0:00:03.256 ********* 2025-07-12 13:56:37.849005 | orchestrator | ok: [testbed-node-3] 
2025-07-12 13:56:37.849016 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:56:37.849026 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:56:37.849037 | orchestrator |
2025-07-12 13:56:37.849047 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-07-12 13:56:37.849058 | orchestrator | Saturday 12 July 2025 13:54:33 +0000 (0:00:00.285) 0:00:03.541 *********
2025-07-12 13:56:37.849069 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:56:37.849079 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:56:37.849090 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:56:37.849100 | orchestrator |
2025-07-12 13:56:37.849111 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-07-12 13:56:37.849122 | orchestrator | Saturday 12 July 2025 13:54:33 +0000 (0:00:00.291) 0:00:03.832 *********
2025-07-12 13:56:37.849132 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:37.849144 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:56:37.849154 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:56:37.849165 | orchestrator |
2025-07-12 13:56:37.849175 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-07-12 13:56:37.849186 | orchestrator | Saturday 12 July 2025 13:54:34 +0000 (0:00:00.524) 0:00:04.356 *********
2025-07-12 13:56:37.849196 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:56:37.849207 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:56:37.849217 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:56:37.849228 | orchestrator |
2025-07-12 13:56:37.849239 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-07-12 13:56:37.849250 | orchestrator | Saturday 12 July 2025 13:54:34 +0000 (0:00:00.325) 0:00:04.681 *********
2025-07-12 13:56:37.849261 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-07-12 13:56:37.849272 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-12 13:56:37.849282 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-12 13:56:37.849293 | orchestrator |
2025-07-12 13:56:37.849303 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-07-12 13:56:37.849323 | orchestrator | Saturday 12 July 2025 13:54:35 +0000 (0:00:00.633) 0:00:05.314 *********
2025-07-12 13:56:37.849333 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:56:37.849344 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:56:37.849354 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:56:37.849388 | orchestrator |
2025-07-12 13:56:37.849399 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-07-12 13:56:37.849410 | orchestrator | Saturday 12 July 2025 13:54:35 +0000 (0:00:00.443) 0:00:05.757 *********
2025-07-12 13:56:37.849421 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-07-12 13:56:37.849431 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-12 13:56:37.849442 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-12 13:56:37.849452 | orchestrator |
2025-07-12 13:56:37.849463 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-07-12 13:56:37.849473 | orchestrator | Saturday 12 July 2025 13:54:37 +0000 (0:00:02.053) 0:00:07.811 *********
2025-07-12 13:56:37.849484 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-07-12 13:56:37.849494 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-07-12 13:56:37.849505 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-07-12 13:56:37.849515 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:37.849526 | orchestrator |
2025-07-12 13:56:37.849546 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-07-12 13:56:37.849597 | orchestrator | Saturday 12 July 2025 13:54:38 +0000 (0:00:00.396) 0:00:08.207 *********
2025-07-12 13:56:37.849612 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-07-12 13:56:37.849626 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-07-12 13:56:37.849638 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-07-12 13:56:37.849648 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:37.849659 | orchestrator |
2025-07-12 13:56:37.849670 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-07-12 13:56:37.849680 | orchestrator | Saturday 12 July 2025 13:54:38 +0000 (0:00:00.768) 0:00:08.976 *********
2025-07-12 13:56:37.849693 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-07-12 13:56:37.849706 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-07-12 13:56:37.849718 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-07-12 13:56:37.849736 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:37.849747 | orchestrator |
2025-07-12 13:56:37.849758 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-07-12 13:56:37.849769 | orchestrator | Saturday 12 July 2025 13:54:39 +0000 (0:00:00.165) 0:00:09.141 *********
2025-07-12 13:56:37.849782 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'faa4cf40665e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-07-12 13:54:36.265101', 'end': '2025-07-12 13:54:36.312091', 'delta': '0:00:00.046990', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['faa4cf40665e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-07-12 13:56:37.849798 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c920b1fc011a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-07-12 13:54:36.998554', 'end': '2025-07-12 13:54:37.037399', 'delta': '0:00:00.038845', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c920b1fc011a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-07-12 13:56:37.849846 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '32b9be9ee837', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-07-12 13:54:37.517417', 'end': '2025-07-12 13:54:37.565258', 'delta': '0:00:00.047841', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['32b9be9ee837'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-07-12 13:56:37.849860 | orchestrator |
2025-07-12 13:56:37.849871 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-07-12 13:56:37.849881 | orchestrator | Saturday 12 July 2025 13:54:39 +0000 (0:00:00.386) 0:00:09.527 *********
2025-07-12 13:56:37.849892 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:56:37.849903 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:56:37.849913 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:56:37.849924 | orchestrator |
2025-07-12 13:56:37.849935 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-07-12 13:56:37.849945 | orchestrator | Saturday 12 July 2025 13:54:39 +0000 (0:00:00.425) 0:00:09.953 *********
2025-07-12 13:56:37.849956 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2025-07-12 13:56:37.849967 | orchestrator |
2025-07-12 13:56:37.849977 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-07-12 13:56:37.849988 | orchestrator | Saturday 12 July 2025 13:54:41 +0000 (0:00:01.744) 0:00:11.697 *********
2025-07-12 13:56:37.849998 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:37.850009 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:56:37.850070 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:56:37.850089 | orchestrator |
2025-07-12 13:56:37.850100 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-07-12 13:56:37.850112 | orchestrator | Saturday 12 July 2025 13:54:41 +0000 (0:00:00.298) 0:00:11.996 *********
2025-07-12 13:56:37.850130 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:37.850148 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:56:37.850166 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:56:37.850183 | orchestrator |
2025-07-12 13:56:37.850200 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-07-12 13:56:37.850219 | orchestrator | Saturday 12 July 2025 13:54:42 +0000 (0:00:00.407) 0:00:12.404 *********
2025-07-12 13:56:37.850236 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:37.850255 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:56:37.850273 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:56:37.850292 | orchestrator |
2025-07-12 13:56:37.850310 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-07-12 13:56:37.850327 | orchestrator | Saturday 12 July 2025 13:54:42 +0000 (0:00:00.473) 0:00:12.877 *********
2025-07-12 13:56:37.850338 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:56:37.850348 | orchestrator |
2025-07-12 13:56:37.850406 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-07-12 13:56:37.850418 | orchestrator | Saturday 12 July 2025 13:54:42 +0000 (0:00:00.140) 0:00:13.018 *********
2025-07-12 13:56:37.850429 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:37.850440 | orchestrator |
2025-07-12 13:56:37.850450 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-07-12 13:56:37.850461 | orchestrator | Saturday 12 July 2025 13:54:43 +0000 (0:00:00.215) 0:00:13.233 *********
2025-07-12 13:56:37.850471 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:37.850481 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:56:37.850492 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:56:37.850503 | orchestrator |
2025-07-12 13:56:37.850513 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-07-12 13:56:37.850524 | orchestrator | Saturday 12 July 2025 13:54:43 +0000 (0:00:00.299) 0:00:13.533 *********
2025-07-12 13:56:37.850535 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:37.850545 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:56:37.850555 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:56:37.850566 | orchestrator |
2025-07-12 13:56:37.850576 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-07-12 13:56:37.850587 | orchestrator | Saturday 12 July 2025 13:54:43 +0000 (0:00:00.327) 0:00:13.861 *********
2025-07-12 13:56:37.850598 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:37.850608 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:56:37.850619 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:56:37.850629 | orchestrator |
2025-07-12 13:56:37.850639 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-07-12 13:56:37.850650 | orchestrator | Saturday 12 July 2025 13:54:44 +0000 (0:00:00.538) 0:00:14.399 *********
2025-07-12 13:56:37.850660 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:37.850671 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:56:37.850681 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:56:37.850692 | orchestrator |
2025-07-12 13:56:37.850702 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-07-12 13:56:37.850713 | orchestrator | Saturday 12 July 2025 13:54:44 +0000 (0:00:00.314) 0:00:14.714 *********
2025-07-12 13:56:37.850723 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:37.850734 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:56:37.850744 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:56:37.850755 | orchestrator |
2025-07-12 13:56:37.850765 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-07-12 13:56:37.850776 | orchestrator | Saturday 12 July 2025 13:54:44 +0000 (0:00:00.308) 0:00:15.030 *********
2025-07-12 13:56:37.850786 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:37.850806 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:56:37.850823 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:56:37.850834 | orchestrator |
2025-07-12 13:56:37.850845 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-07-12 13:56:37.850900 | orchestrator | Saturday 12 July 2025 13:54:45 +0000 (0:00:00.501) 0:00:15.339 *********
2025-07-12 13:56:37.850912 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:56:37.850923 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:56:37.850933 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:56:37.850944 | orchestrator | 2025-07-12 13:56:37.850954 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-07-12 13:56:37.850965 | orchestrator | Saturday 12 July 2025 13:54:45 +0000 (0:00:00.501) 0:00:15.841 ********* 2025-07-12 13:56:37.850977 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f86cb3d6--0e78--5b6a--8369--843476bf59dc-osd--block--f86cb3d6--0e78--5b6a--8369--843476bf59dc', 'dm-uuid-LVM-cOowIwi4ngbGyp4J1ONZ0QCO9jALxi4Uq1QblHIlw69fQDfMDIPDIfIejgLGHClo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:37.850990 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8c07aa4b--79b5--5c8f--bb7a--3f1e0dfe1f2a-osd--block--8c07aa4b--79b5--5c8f--bb7a--3f1e0dfe1f2a', 'dm-uuid-LVM-ryVpm1vZGfdej5YU7k2fce5rcubHgJ30K2EonOshBiJNmKQEiOlu6ex7QlfnrVEr'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:37.851002 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:37.851014 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:37.851025 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:37.851036 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:37.851047 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:37.851100 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:37.851114 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:37.851125 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:37.851139 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d5efef6-d196-4496-8e4e-101ce21afc70', 'scsi-SQEMU_QEMU_HARDDISK_5d5efef6-d196-4496-8e4e-101ce21afc70'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d5efef6-d196-4496-8e4e-101ce21afc70-part1', 'scsi-SQEMU_QEMU_HARDDISK_5d5efef6-d196-4496-8e4e-101ce21afc70-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d5efef6-d196-4496-8e4e-101ce21afc70-part14', 'scsi-SQEMU_QEMU_HARDDISK_5d5efef6-d196-4496-8e4e-101ce21afc70-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d5efef6-d196-4496-8e4e-101ce21afc70-part15', 'scsi-SQEMU_QEMU_HARDDISK_5d5efef6-d196-4496-8e4e-101ce21afc70-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d5efef6-d196-4496-8e4e-101ce21afc70-part16', 'scsi-SQEMU_QEMU_HARDDISK_5d5efef6-d196-4496-8e4e-101ce21afc70-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:56:37.851153 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f86cb3d6--0e78--5b6a--8369--843476bf59dc-osd--block--f86cb3d6--0e78--5b6a--8369--843476bf59dc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JiGeg9-HB3s-KxsR-vAVJ-z7Up-5NSC-twJAJp', 'scsi-0QEMU_QEMU_HARDDISK_cf6824d0-2336-4864-a32f-bffef7606523', 'scsi-SQEMU_QEMU_HARDDISK_cf6824d0-2336-4864-a32f-bffef7606523'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:56:37.851206 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8c07aa4b--79b5--5c8f--bb7a--3f1e0dfe1f2a-osd--block--8c07aa4b--79b5--5c8f--bb7a--3f1e0dfe1f2a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vVYPqR-aOTR-bKmD-EWSo-w8X6-HkbQ-6enrQh', 'scsi-0QEMU_QEMU_HARDDISK_bad1a367-9870-4c1b-af18-4999b26662c8', 'scsi-SQEMU_QEMU_HARDDISK_bad1a367-9870-4c1b-af18-4999b26662c8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:56:37.851221 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec46bf14-c827-46d0-9a8c-19525aeacad6', 'scsi-SQEMU_QEMU_HARDDISK_ec46bf14-c827-46d0-9a8c-19525aeacad6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:56:37.851232 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8be3c046--75c4--5df6--b59b--0076bb3a4ccd-osd--block--8be3c046--75c4--5df6--b59b--0076bb3a4ccd', 'dm-uuid-LVM-eNKlWRslYY1LPS1Lsl2a1zcjZPSHC2eEwz1DWsQznfBgygDIVbtNzXLvhZOCsm1i'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:37.851243 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-13-00-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:56:37.851255 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f8ec8ce8--a083--5a5f--ae06--780cf5acbe42-osd--block--f8ec8ce8--a083--5a5f--ae06--780cf5acbe42', 
'dm-uuid-LVM-iP66jRhrYjcnf87yxq9NTie5JOPBTSKdfp6rB9Iyv3FK2fgoUAfj6YBcRbmD3h7y'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:37.851266 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:37.851284 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:37.851333 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:37.851347 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:37.851411 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:37.851425 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:56:37.851436 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:37.851447 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:37.851457 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:37.851510 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7818c38c-8f07-44e8-a255-faa9c2adb8b1', 'scsi-SQEMU_QEMU_HARDDISK_7818c38c-8f07-44e8-a255-faa9c2adb8b1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7818c38c-8f07-44e8-a255-faa9c2adb8b1-part1', 'scsi-SQEMU_QEMU_HARDDISK_7818c38c-8f07-44e8-a255-faa9c2adb8b1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7818c38c-8f07-44e8-a255-faa9c2adb8b1-part14', 'scsi-SQEMU_QEMU_HARDDISK_7818c38c-8f07-44e8-a255-faa9c2adb8b1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7818c38c-8f07-44e8-a255-faa9c2adb8b1-part15', 'scsi-SQEMU_QEMU_HARDDISK_7818c38c-8f07-44e8-a255-faa9c2adb8b1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7818c38c-8f07-44e8-a255-faa9c2adb8b1-part16', 'scsi-SQEMU_QEMU_HARDDISK_7818c38c-8f07-44e8-a255-faa9c2adb8b1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:56:37.851534 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8be3c046--75c4--5df6--b59b--0076bb3a4ccd-osd--block--8be3c046--75c4--5df6--b59b--0076bb3a4ccd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9521cP-6yAn-ReSb-qmNR-WXii-Wgkw-QXC1e3', 'scsi-0QEMU_QEMU_HARDDISK_b303d5ed-b20f-4882-90f3-23adead236a1', 'scsi-SQEMU_QEMU_HARDDISK_b303d5ed-b20f-4882-90f3-23adead236a1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:56:37.851546 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f8ec8ce8--a083--5a5f--ae06--780cf5acbe42-osd--block--f8ec8ce8--a083--5a5f--ae06--780cf5acbe42'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-INXKyE-D6vr-McjC-Eu2E-DpjP-73lP-XlCfb6', 'scsi-0QEMU_QEMU_HARDDISK_751344b4-b2fa-492b-b080-e9e5b4c67369', 'scsi-SQEMU_QEMU_HARDDISK_751344b4-b2fa-492b-b080-e9e5b4c67369'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:56:37.851557 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11616408-d26b-4882-b347-f5b812b9aa41', 'scsi-SQEMU_QEMU_HARDDISK_11616408-d26b-4882-b347-f5b812b9aa41'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:56:37.851568 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--76cf46ce--80cb--5d18--8384--c0838affc5b6-osd--block--76cf46ce--80cb--5d18--8384--c0838affc5b6', 'dm-uuid-LVM-9E3Qoc7BCPXfuH39FeSqjLVWWsxKeexa5Wzect2iOuEg1v0e8lAoF6zGMmhtApJL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:37.851586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel 
Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-13-00-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:56:37.851597 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:56:37.851617 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--465622e3--903d--5505--a41f--76599f0f3897-osd--block--465622e3--903d--5505--a41f--76599f0f3897', 'dm-uuid-LVM-SjqvmYAJNXJDCerOGLeDv7HFSAwwonW6KnAMGsmGSt2GjW35ncgGDsYraOXL2Weh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:37.851627 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:37.851637 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:37.851647 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:37.851656 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:37.851666 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:37.851676 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:37.851691 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:37.851700 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:37.851725 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968', 'scsi-SQEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968-part1', 'scsi-SQEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968-part14', 'scsi-SQEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 
'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968-part15', 'scsi-SQEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968-part16', 'scsi-SQEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:56:37.851737 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--76cf46ce--80cb--5d18--8384--c0838affc5b6-osd--block--76cf46ce--80cb--5d18--8384--c0838affc5b6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lQjeTc-DDG1-udOt-seuP-O91I-YAn0-aXReDq', 'scsi-0QEMU_QEMU_HARDDISK_375d9971-3091-4ee7-ad22-0f2ee4316c51', 'scsi-SQEMU_QEMU_HARDDISK_375d9971-3091-4ee7-ad22-0f2ee4316c51'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:56:37.851747 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--465622e3--903d--5505--a41f--76599f0f3897-osd--block--465622e3--903d--5505--a41f--76599f0f3897'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NoBQRK-UZsD-zyBB-c03p-ieDL-salv-zZqPVl', 'scsi-0QEMU_QEMU_HARDDISK_80678cc2-85df-4096-9cf9-3a4ced065123', 'scsi-SQEMU_QEMU_HARDDISK_80678cc2-85df-4096-9cf9-3a4ced065123'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:56:37.851763 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e2296ca-3498-48cf-a25a-293306b54174', 'scsi-SQEMU_QEMU_HARDDISK_1e2296ca-3498-48cf-a25a-293306b54174'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:56:37.851782 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-13-00-30-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:56:37.851792 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:56:37.851802 | orchestrator | 2025-07-12 13:56:37.851811 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-07-12 13:56:37.851821 | orchestrator | Saturday 12 July 2025 13:54:46 +0000 (0:00:00.586) 0:00:16.427 ********* 2025-07-12 13:56:37.851832 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f86cb3d6--0e78--5b6a--8369--843476bf59dc-osd--block--f86cb3d6--0e78--5b6a--8369--843476bf59dc', 'dm-uuid-LVM-cOowIwi4ngbGyp4J1ONZ0QCO9jALxi4Uq1QblHIlw69fQDfMDIPDIfIejgLGHClo'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.851843 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8c07aa4b--79b5--5c8f--bb7a--3f1e0dfe1f2a-osd--block--8c07aa4b--79b5--5c8f--bb7a--3f1e0dfe1f2a', 'dm-uuid-LVM-ryVpm1vZGfdej5YU7k2fce5rcubHgJ30K2EonOshBiJNmKQEiOlu6ex7QlfnrVEr'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.851854 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.851869 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.851879 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.851900 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.851910 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.851920 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.851930 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.851945 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.851969 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d5efef6-d196-4496-8e4e-101ce21afc70', 'scsi-SQEMU_QEMU_HARDDISK_5d5efef6-d196-4496-8e4e-101ce21afc70'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d5efef6-d196-4496-8e4e-101ce21afc70-part1', 'scsi-SQEMU_QEMU_HARDDISK_5d5efef6-d196-4496-8e4e-101ce21afc70-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d5efef6-d196-4496-8e4e-101ce21afc70-part14', 'scsi-SQEMU_QEMU_HARDDISK_5d5efef6-d196-4496-8e4e-101ce21afc70-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d5efef6-d196-4496-8e4e-101ce21afc70-part15', 'scsi-SQEMU_QEMU_HARDDISK_5d5efef6-d196-4496-8e4e-101ce21afc70-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_5d5efef6-d196-4496-8e4e-101ce21afc70-part16', 'scsi-SQEMU_QEMU_HARDDISK_5d5efef6-d196-4496-8e4e-101ce21afc70-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.851980 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8be3c046--75c4--5df6--b59b--0076bb3a4ccd-osd--block--8be3c046--75c4--5df6--b59b--0076bb3a4ccd', 'dm-uuid-LVM-eNKlWRslYY1LPS1Lsl2a1zcjZPSHC2eEwz1DWsQznfBgygDIVbtNzXLvhZOCsm1i'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.851991 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f86cb3d6--0e78--5b6a--8369--843476bf59dc-osd--block--f86cb3d6--0e78--5b6a--8369--843476bf59dc'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JiGeg9-HB3s-KxsR-vAVJ-z7Up-5NSC-twJAJp', 'scsi-0QEMU_QEMU_HARDDISK_cf6824d0-2336-4864-a32f-bffef7606523', 'scsi-SQEMU_QEMU_HARDDISK_cf6824d0-2336-4864-a32f-bffef7606523'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.852007 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f8ec8ce8--a083--5a5f--ae06--780cf5acbe42-osd--block--f8ec8ce8--a083--5a5f--ae06--780cf5acbe42', 'dm-uuid-LVM-iP66jRhrYjcnf87yxq9NTie5JOPBTSKdfp6rB9Iyv3FK2fgoUAfj6YBcRbmD3h7y'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.852027 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8c07aa4b--79b5--5c8f--bb7a--3f1e0dfe1f2a-osd--block--8c07aa4b--79b5--5c8f--bb7a--3f1e0dfe1f2a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vVYPqR-aOTR-bKmD-EWSo-w8X6-HkbQ-6enrQh', 'scsi-0QEMU_QEMU_HARDDISK_bad1a367-9870-4c1b-af18-4999b26662c8', 'scsi-SQEMU_QEMU_HARDDISK_bad1a367-9870-4c1b-af18-4999b26662c8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.852038 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.852048 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec46bf14-c827-46d0-9a8c-19525aeacad6', 'scsi-SQEMU_QEMU_HARDDISK_ec46bf14-c827-46d0-9a8c-19525aeacad6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.852067 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.852077 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-13-00-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.852087 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.852097 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:56:37.852116 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.852126 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.852136 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.852152 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.852162 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--76cf46ce--80cb--5d18--8384--c0838affc5b6-osd--block--76cf46ce--80cb--5d18--8384--c0838affc5b6', 'dm-uuid-LVM-9E3Qoc7BCPXfuH39FeSqjLVWWsxKeexa5Wzect2iOuEg1v0e8lAoF6zGMmhtApJL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.852172 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.852192 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--465622e3--903d--5505--a41f--76599f0f3897-osd--block--465622e3--903d--5505--a41f--76599f0f3897', 'dm-uuid-LVM-SjqvmYAJNXJDCerOGLeDv7HFSAwwonW6KnAMGsmGSt2GjW35ncgGDsYraOXL2Weh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.852204 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7818c38c-8f07-44e8-a255-faa9c2adb8b1', 'scsi-SQEMU_QEMU_HARDDISK_7818c38c-8f07-44e8-a255-faa9c2adb8b1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7818c38c-8f07-44e8-a255-faa9c2adb8b1-part1', 'scsi-SQEMU_QEMU_HARDDISK_7818c38c-8f07-44e8-a255-faa9c2adb8b1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7818c38c-8f07-44e8-a255-faa9c2adb8b1-part14', 'scsi-SQEMU_QEMU_HARDDISK_7818c38c-8f07-44e8-a255-faa9c2adb8b1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7818c38c-8f07-44e8-a255-faa9c2adb8b1-part15', 'scsi-SQEMU_QEMU_HARDDISK_7818c38c-8f07-44e8-a255-faa9c2adb8b1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7818c38c-8f07-44e8-a255-faa9c2adb8b1-part16', 'scsi-SQEMU_QEMU_HARDDISK_7818c38c-8f07-44e8-a255-faa9c2adb8b1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-07-12 13:56:37.852221 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.852231 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.852251 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8be3c046--75c4--5df6--b59b--0076bb3a4ccd-osd--block--8be3c046--75c4--5df6--b59b--0076bb3a4ccd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9521cP-6yAn-ReSb-qmNR-WXii-Wgkw-QXC1e3', 'scsi-0QEMU_QEMU_HARDDISK_b303d5ed-b20f-4882-90f3-23adead236a1', 'scsi-SQEMU_QEMU_HARDDISK_b303d5ed-b20f-4882-90f3-23adead236a1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.852262 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.852277 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f8ec8ce8--a083--5a5f--ae06--780cf5acbe42-osd--block--f8ec8ce8--a083--5a5f--ae06--780cf5acbe42'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-INXKyE-D6vr-McjC-Eu2E-DpjP-73lP-XlCfb6', 'scsi-0QEMU_QEMU_HARDDISK_751344b4-b2fa-492b-b080-e9e5b4c67369', 'scsi-SQEMU_QEMU_HARDDISK_751344b4-b2fa-492b-b080-e9e5b4c67369'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.852288 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11616408-d26b-4882-b347-f5b812b9aa41', 'scsi-SQEMU_QEMU_HARDDISK_11616408-d26b-4882-b347-f5b812b9aa41'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.852298 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.852317 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-13-00-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.852327 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.852337 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:56:37.852347 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.852380 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.852390 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.852413 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968', 'scsi-SQEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968-part1', 'scsi-SQEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968-part14', 'scsi-SQEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968-part15', 'scsi-SQEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968-part16', 'scsi-SQEMU_QEMU_HARDDISK_99f613dd-b7e8-44ac-806c-7adcee7e2968-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-07-12 13:56:37.852430 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--76cf46ce--80cb--5d18--8384--c0838affc5b6-osd--block--76cf46ce--80cb--5d18--8384--c0838affc5b6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lQjeTc-DDG1-udOt-seuP-O91I-YAn0-aXReDq', 'scsi-0QEMU_QEMU_HARDDISK_375d9971-3091-4ee7-ad22-0f2ee4316c51', 'scsi-SQEMU_QEMU_HARDDISK_375d9971-3091-4ee7-ad22-0f2ee4316c51'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.852441 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--465622e3--903d--5505--a41f--76599f0f3897-osd--block--465622e3--903d--5505--a41f--76599f0f3897'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NoBQRK-UZsD-zyBB-c03p-ieDL-salv-zZqPVl', 'scsi-0QEMU_QEMU_HARDDISK_80678cc2-85df-4096-9cf9-3a4ced065123', 'scsi-SQEMU_QEMU_HARDDISK_80678cc2-85df-4096-9cf9-3a4ced065123'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.852451 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e2296ca-3498-48cf-a25a-293306b54174', 'scsi-SQEMU_QEMU_HARDDISK_1e2296ca-3498-48cf-a25a-293306b54174'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.852468 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-13-00-30-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:37.852478 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:56:37.852487 | orchestrator | 2025-07-12 13:56:37.852497 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-07-12 13:56:37.852506 | orchestrator | Saturday 12 July 2025 13:54:46 +0000 (0:00:00.577) 0:00:17.004 ********* 2025-07-12 13:56:37.852516 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:56:37.852525 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:56:37.852534 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:56:37.852549 | orchestrator | 2025-07-12 13:56:37.852559 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-07-12 13:56:37.852568 | orchestrator | Saturday 12 July 2025 13:54:47 +0000 (0:00:00.735) 0:00:17.740 ********* 2025-07-12 13:56:37.852577 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:56:37.852618 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:56:37.852629 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:56:37.852638 | orchestrator | 2025-07-12 13:56:37.852647 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-07-12 13:56:37.852657 | orchestrator | Saturday 12 July 2025 13:54:48 +0000 (0:00:00.491) 0:00:18.231 ********* 2025-07-12 13:56:37.852666 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:56:37.852675 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:56:37.852684 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:56:37.852693 | orchestrator | 2025-07-12 13:56:37.852703 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-07-12 13:56:37.852712 | orchestrator | Saturday 12 July 2025 13:54:48 +0000 (0:00:00.647) 0:00:18.879 
********* 2025-07-12 13:56:37.852721 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:56:37.852731 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:56:37.852740 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:56:37.852749 | orchestrator | 2025-07-12 13:56:37.852758 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-07-12 13:56:37.852768 | orchestrator | Saturday 12 July 2025 13:54:49 +0000 (0:00:00.277) 0:00:19.156 ********* 2025-07-12 13:56:37.852777 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:56:37.852786 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:56:37.852796 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:56:37.852805 | orchestrator | 2025-07-12 13:56:37.852815 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-07-12 13:56:37.852824 | orchestrator | Saturday 12 July 2025 13:54:49 +0000 (0:00:00.405) 0:00:19.562 ********* 2025-07-12 13:56:37.852834 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:56:37.852843 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:56:37.852852 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:56:37.852861 | orchestrator | 2025-07-12 13:56:37.852871 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-07-12 13:56:37.852880 | orchestrator | Saturday 12 July 2025 13:54:49 +0000 (0:00:00.489) 0:00:20.052 ********* 2025-07-12 13:56:37.852890 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-07-12 13:56:37.852899 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-07-12 13:56:37.852908 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-07-12 13:56:37.852918 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-07-12 13:56:37.852927 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-07-12 13:56:37.852936 | orchestrator 
| ok: [testbed-node-3] => (item=testbed-node-2) 2025-07-12 13:56:37.852945 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-07-12 13:56:37.852955 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-07-12 13:56:37.852964 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-07-12 13:56:37.852973 | orchestrator | 2025-07-12 13:56:37.852983 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-07-12 13:56:37.852992 | orchestrator | Saturday 12 July 2025 13:54:50 +0000 (0:00:00.989) 0:00:21.041 ********* 2025-07-12 13:56:37.853001 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-07-12 13:56:37.853010 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-07-12 13:56:37.853019 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-07-12 13:56:37.853029 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:56:37.853038 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-07-12 13:56:37.853047 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-07-12 13:56:37.853057 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-07-12 13:56:37.853072 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:56:37.853081 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-07-12 13:56:37.853090 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-07-12 13:56:37.853099 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-07-12 13:56:37.853108 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:56:37.853118 | orchestrator | 2025-07-12 13:56:37.853127 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-07-12 13:56:37.853136 | orchestrator | Saturday 12 July 2025 13:54:51 +0000 (0:00:00.363) 0:00:21.405 ********* 2025-07-12 
13:56:37.853146 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:56:37.853155 | orchestrator | 2025-07-12 13:56:37.853165 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-07-12 13:56:37.853175 | orchestrator | Saturday 12 July 2025 13:54:51 +0000 (0:00:00.682) 0:00:22.087 ********* 2025-07-12 13:56:37.853188 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:56:37.853198 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:56:37.853207 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:56:37.853216 | orchestrator | 2025-07-12 13:56:37.853230 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-07-12 13:56:37.853240 | orchestrator | Saturday 12 July 2025 13:54:52 +0000 (0:00:00.305) 0:00:22.393 ********* 2025-07-12 13:56:37.853250 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:56:37.853259 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:56:37.853268 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:56:37.853277 | orchestrator | 2025-07-12 13:56:37.853287 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-07-12 13:56:37.853296 | orchestrator | Saturday 12 July 2025 13:54:52 +0000 (0:00:00.289) 0:00:22.682 ********* 2025-07-12 13:56:37.853305 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:56:37.853315 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:56:37.853324 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:56:37.853333 | orchestrator | 2025-07-12 13:56:37.853343 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-07-12 13:56:37.853352 | orchestrator | Saturday 12 July 2025 13:54:52 +0000 (0:00:00.315) 0:00:22.998 ********* 2025-07-12 
13:56:37.853377 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:56:37.853387 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:56:37.853396 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:56:37.853406 | orchestrator | 2025-07-12 13:56:37.853415 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-07-12 13:56:37.853425 | orchestrator | Saturday 12 July 2025 13:54:53 +0000 (0:00:00.586) 0:00:23.585 ********* 2025-07-12 13:56:37.853434 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 13:56:37.853443 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 13:56:37.853452 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 13:56:37.853462 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:56:37.853471 | orchestrator | 2025-07-12 13:56:37.853480 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-07-12 13:56:37.853490 | orchestrator | Saturday 12 July 2025 13:54:53 +0000 (0:00:00.366) 0:00:23.951 ********* 2025-07-12 13:56:37.853499 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 13:56:37.853508 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 13:56:37.853518 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 13:56:37.853527 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:56:37.853536 | orchestrator | 2025-07-12 13:56:37.853546 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-07-12 13:56:37.853555 | orchestrator | Saturday 12 July 2025 13:54:54 +0000 (0:00:00.347) 0:00:24.298 ********* 2025-07-12 13:56:37.853574 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 13:56:37.853583 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 13:56:37.853593 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 13:56:37.853602 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:56:37.853611 | orchestrator | 2025-07-12 13:56:37.853621 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-07-12 13:56:37.853630 | orchestrator | Saturday 12 July 2025 13:54:54 +0000 (0:00:00.347) 0:00:24.646 ********* 2025-07-12 13:56:37.853640 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:56:37.853649 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:56:37.853659 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:56:37.853668 | orchestrator | 2025-07-12 13:56:37.853678 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-07-12 13:56:37.853687 | orchestrator | Saturday 12 July 2025 13:54:54 +0000 (0:00:00.318) 0:00:24.965 ********* 2025-07-12 13:56:37.853696 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-07-12 13:56:37.853706 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-07-12 13:56:37.853715 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-07-12 13:56:37.853724 | orchestrator | 2025-07-12 13:56:37.853734 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-07-12 13:56:37.853743 | orchestrator | Saturday 12 July 2025 13:54:55 +0000 (0:00:00.499) 0:00:25.464 ********* 2025-07-12 13:56:37.853753 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-07-12 13:56:37.853762 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-12 13:56:37.853771 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-12 13:56:37.853780 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-07-12 13:56:37.853790 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4)
2025-07-12 13:56:37.853799 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-07-12 13:56:37.853809 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-07-12 13:56:37.853818 | orchestrator |
2025-07-12 13:56:37.853827 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-07-12 13:56:37.853837 | orchestrator | Saturday 12 July 2025 13:54:56 +0000 (0:00:00.966) 0:00:26.431 *********
2025-07-12 13:56:37.853846 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-07-12 13:56:37.853855 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-12 13:56:37.853865 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-12 13:56:37.853874 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-07-12 13:56:37.853883 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-07-12 13:56:37.853893 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-07-12 13:56:37.853906 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-07-12 13:56:37.853916 | orchestrator |
2025-07-12 13:56:37.853930 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2025-07-12 13:56:37.853940 | orchestrator | Saturday 12 July 2025 13:54:58 +0000 (0:00:01.985) 0:00:28.417 *********
2025-07-12 13:56:37.853949 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:37.853959 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:56:37.853968 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2025-07-12 13:56:37.853978 | orchestrator |
2025-07-12 13:56:37.853987 | orchestrator | TASK [create openstack pool(s)] ************************************************
2025-07-12 13:56:37.853997 | orchestrator | Saturday 12 July 2025 13:54:58 +0000 (0:00:00.387) 0:00:28.805 *********
2025-07-12 13:56:37.854012 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-07-12 13:56:37.854071 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-07-12 13:56:37.854082 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-07-12 13:56:37.854092 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-07-12 13:56:37.854102 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-07-12 13:56:37.854112 | orchestrator |
2025-07-12 13:56:37.854121 | orchestrator | TASK [generate keys] ***********************************************************
2025-07-12 13:56:37.854130 | orchestrator | Saturday 12 July 2025 13:55:42 +0000 (0:00:43.531) 0:01:12.336 *********
2025-07-12 13:56:37.854140 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 13:56:37.854149 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 13:56:37.854158 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 13:56:37.854168 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 13:56:37.854177 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 13:56:37.854186 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 13:56:37.854195 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2025-07-12 13:56:37.854205 | orchestrator |
2025-07-12 13:56:37.854214 | orchestrator | TASK [get keys from monitors] **************************************************
2025-07-12 13:56:37.854224 | orchestrator | Saturday 12 July 2025 13:56:06 +0000 (0:00:24.051) 0:01:36.388 *********
2025-07-12 13:56:37.854233 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 13:56:37.854242 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 13:56:37.854252 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 13:56:37.854261 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 13:56:37.854270 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 13:56:37.854280 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 13:56:37.854289 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-07-12 13:56:37.854298 | orchestrator |
2025-07-12 13:56:37.854308 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2025-07-12 13:56:37.854317 | orchestrator | Saturday 12 July 2025 13:56:18 +0000 (0:00:12.195) 0:01:48.584 *********
2025-07-12 13:56:37.854326 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 13:56:37.854342 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-07-12 13:56:37.854351 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-07-12 13:56:37.854377 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 13:56:37.854387 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-07-12 13:56:37.854403 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-07-12 13:56:37.854419 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 13:56:37.854429 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-07-12 13:56:37.854438 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-07-12 13:56:37.854447 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 13:56:37.854457 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-07-12 13:56:37.854466 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-07-12 13:56:37.854476 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 13:56:37.854485 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-07-12 13:56:37.854494 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-07-12 13:56:37.854503 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 13:56:37.854513 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-07-12 13:56:37.854522 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-07-12 13:56:37.854532 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-07-12 13:56:37.854541 | orchestrator |
2025-07-12 13:56:37.854551 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:56:37.854561 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2025-07-12 13:56:37.854572 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-07-12 13:56:37.854581 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-07-12 13:56:37.854591 | orchestrator |
2025-07-12 13:56:37.854600 | orchestrator |
2025-07-12 13:56:37.854610 | orchestrator |
2025-07-12 13:56:37.854619 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:56:37.854629 | orchestrator | Saturday 12 July 2025 13:56:35 +0000 (0:00:17.442) 0:02:06.026 *********
2025-07-12 13:56:37.854638 | orchestrator | ===============================================================================
2025-07-12 13:56:37.854647 | orchestrator | create openstack pool(s) ----------------------------------------------- 43.53s
2025-07-12 13:56:37.854657 | orchestrator | generate keys ---------------------------------------------------------- 24.05s
2025-07-12 13:56:37.854666 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.44s
2025-07-12 13:56:37.854676 | orchestrator | get keys from monitors ------------------------------------------------- 12.20s
2025-07-12 13:56:37.854685 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.05s
2025-07-12 13:56:37.854694 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.99s
2025-07-12 13:56:37.854704 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.74s
2025-07-12 13:56:37.854713 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.99s
2025-07-12 13:56:37.854722 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.97s
2025-07-12 13:56:37.854737 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.78s
2025-07-12 13:56:37.854747 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.77s
2025-07-12 13:56:37.854756 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.74s
2025-07-12 13:56:37.854766 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.68s
2025-07-12 13:56:37.854775 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.67s
2025-07-12 13:56:37.854784 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.65s
2025-07-12 13:56:37.854794 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.63s
2025-07-12 13:56:37.854803 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.63s
2025-07-12 13:56:37.854812 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.59s
2025-07-12 13:56:37.854822 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.59s
2025-07-12 13:56:37.854831 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.58s
2025-07-12 13:56:37.854840 | 2025-07-12 13:56:37 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:56:40.894436 | orchestrator | 2025-07-12 13:56:40 | INFO  | Task bb94a130-a8be-4cf2-ab8e-23c1d451ae35 is in state STARTED
2025-07-12 13:56:40.896151 | orchestrator | 2025-07-12 13:56:40 | INFO  | Task 7aa86ed7-f566-4724-b014-4d0e7b63c40e is in state STARTED
2025-07-12 13:56:40.898199 | orchestrator | 2025-07-12 13:56:40 | INFO  | Task 79a82a20-112e-4d38-aba6-c6dac2c67783 is in state STARTED
2025-07-12 13:56:40.898397 | orchestrator | 2025-07-12 13:56:40 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:56:43.948154 | orchestrator | 2025-07-12 13:56:43 | INFO  | Task bb94a130-a8be-4cf2-ab8e-23c1d451ae35 is in state STARTED
2025-07-12 13:56:43.949696 | orchestrator | 2025-07-12 13:56:43 | INFO  | Task 7aa86ed7-f566-4724-b014-4d0e7b63c40e is in state STARTED
2025-07-12 13:56:43.952349 | orchestrator | 2025-07-12 13:56:43 | INFO  | Task 79a82a20-112e-4d38-aba6-c6dac2c67783 is in state STARTED
2025-07-12 13:56:43.952461 | orchestrator | 2025-07-12 13:56:43 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:56:47.007321 | orchestrator | 2025-07-12 13:56:47 | INFO  | Task bb94a130-a8be-4cf2-ab8e-23c1d451ae35 is in state STARTED
2025-07-12 13:56:47.010225 | orchestrator | 2025-07-12 13:56:47 | INFO  | Task 7aa86ed7-f566-4724-b014-4d0e7b63c40e is in state STARTED
2025-07-12 13:56:47.011764 | orchestrator | 2025-07-12 13:56:47 | INFO  | Task 79a82a20-112e-4d38-aba6-c6dac2c67783 is in state STARTED
2025-07-12 13:56:47.012674 | orchestrator | 2025-07-12 13:56:47 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:56:50.068224 | orchestrator | 2025-07-12 13:56:50 | INFO  | Task bb94a130-a8be-4cf2-ab8e-23c1d451ae35 is in state STARTED
2025-07-12 13:56:50.070415 | orchestrator | 2025-07-12 13:56:50 | INFO  | Task 7aa86ed7-f566-4724-b014-4d0e7b63c40e is in state STARTED
2025-07-12 13:56:50.074855 | orchestrator | 2025-07-12 13:56:50 | INFO  | Task 79a82a20-112e-4d38-aba6-c6dac2c67783 is in state STARTED
2025-07-12 13:56:50.074905 | orchestrator | 2025-07-12 13:56:50 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:56:53.126286 | orchestrator | 2025-07-12 13:56:53 | INFO  | Task bb94a130-a8be-4cf2-ab8e-23c1d451ae35 is in state STARTED
2025-07-12 13:56:53.126618 | orchestrator | 2025-07-12 13:56:53 | INFO  | Task 7aa86ed7-f566-4724-b014-4d0e7b63c40e is in state STARTED
2025-07-12 13:56:53.127326 | orchestrator | 2025-07-12 13:56:53 | INFO  | Task 79a82a20-112e-4d38-aba6-c6dac2c67783 is in state STARTED
2025-07-12 13:56:53.129603 | orchestrator | 2025-07-12 13:56:53 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:56:56.173597 | orchestrator | 2025-07-12 13:56:56 | INFO  | Task bb94a130-a8be-4cf2-ab8e-23c1d451ae35 is in state STARTED
2025-07-12 13:56:56.174420 | orchestrator | 2025-07-12 13:56:56 | INFO  | Task 7aa86ed7-f566-4724-b014-4d0e7b63c40e is in state STARTED
2025-07-12 13:56:56.175679 | orchestrator | 2025-07-12 13:56:56 | INFO  | Task 79a82a20-112e-4d38-aba6-c6dac2c67783 is in state STARTED
2025-07-12 13:56:56.175894 | orchestrator | 2025-07-12 13:56:56 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:56:59.236865 | orchestrator | 2025-07-12 13:56:59 | INFO  | Task bb94a130-a8be-4cf2-ab8e-23c1d451ae35 is in state STARTED
2025-07-12 13:56:59.238955 | orchestrator | 2025-07-12 13:56:59 | INFO  | Task 7aa86ed7-f566-4724-b014-4d0e7b63c40e is in state STARTED
2025-07-12 13:56:59.241978 | orchestrator | 2025-07-12 13:56:59 | INFO  | Task 79a82a20-112e-4d38-aba6-c6dac2c67783 is in state STARTED
2025-07-12 13:56:59.242057 | orchestrator | 2025-07-12 13:56:59 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:57:02.302447 | orchestrator | 2025-07-12 13:57:02 | INFO  | Task bb94a130-a8be-4cf2-ab8e-23c1d451ae35 is in state STARTED
2025-07-12 13:57:02.305050 | orchestrator | 2025-07-12 13:57:02 | INFO  | Task 7aa86ed7-f566-4724-b014-4d0e7b63c40e is in state STARTED
2025-07-12 13:57:02.308130 | orchestrator | 2025-07-12 13:57:02 | INFO  | Task 79a82a20-112e-4d38-aba6-c6dac2c67783 is in state STARTED
2025-07-12 13:57:02.308166 | orchestrator | 2025-07-12 13:57:02 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:57:05.361324 | orchestrator | 2025-07-12 13:57:05 | INFO  | Task bb94a130-a8be-4cf2-ab8e-23c1d451ae35 is in state SUCCESS
2025-07-12 13:57:05.363615 | orchestrator | 2025-07-12 13:57:05 | INFO  | Task 7aa86ed7-f566-4724-b014-4d0e7b63c40e is in state STARTED
2025-07-12 13:57:05.366885 | orchestrator | 2025-07-12 13:57:05 | INFO  | Task 79a82a20-112e-4d38-aba6-c6dac2c67783 is in state SUCCESS
2025-07-12 13:57:05.368755 | orchestrator |
2025-07-12 13:57:05.368788 | orchestrator |
2025-07-12 13:57:05.368800 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2025-07-12 13:57:05.368811 | orchestrator |
2025-07-12 13:57:05.368823 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2025-07-12 13:57:05.368834 | orchestrator | Saturday 12 July 2025 13:56:40 +0000 (0:00:00.157) 0:00:00.157 *********
2025-07-12 13:57:05.368845 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2025-07-12 13:57:05.368857 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-07-12 13:57:05.368885 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-07-12 13:57:05.368897 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2025-07-12 13:57:05.368907 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-07-12 13:57:05.368918 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2025-07-12 13:57:05.368929 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2025-07-12 13:57:05.368939 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2025-07-12 13:57:05.368951 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2025-07-12 13:57:05.368962 | orchestrator |
2025-07-12 13:57:05.369288 | orchestrator | TASK [Create share directory] **************************************************
2025-07-12 13:57:05.369308 | orchestrator | Saturday 12 July 2025 13:56:44 +0000 (0:00:04.055) 0:00:04.213 *********
2025-07-12 13:57:05.369383 | orchestrator | changed: [testbed-manager -> localhost]
2025-07-12 13:57:05.369396 | orchestrator |
2025-07-12 13:57:05.369407 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2025-07-12 13:57:05.369418 | orchestrator | Saturday 12 July 2025 13:56:45 +0000 (0:00:00.988) 0:00:05.202 *********
2025-07-12 13:57:05.369429 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-07-12 13:57:05.369440 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-07-12 13:57:05.369450 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-07-12 13:57:05.369461 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-07-12 13:57:05.369471 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-07-12 13:57:05.369482 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-07-12 13:57:05.369492 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-07-12 13:57:05.369503 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-07-12 13:57:05.369513 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-07-12 13:57:05.369524 | orchestrator |
2025-07-12 13:57:05.369534 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2025-07-12 13:57:05.369545 | orchestrator | Saturday 12 July 2025 13:56:58 +0000 (0:00:13.045) 0:00:18.247 *********
2025-07-12 13:57:05.369556 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2025-07-12 13:57:05.369567 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-07-12 13:57:05.369578 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-07-12 13:57:05.369588 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2025-07-12 13:57:05.369599 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-07-12 13:57:05.369609 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2025-07-12 13:57:05.369620 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2025-07-12 13:57:05.369630 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2025-07-12 13:57:05.369640 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2025-07-12 13:57:05.369651 | orchestrator |
2025-07-12 13:57:05.369662 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:57:05.369673 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:57:05.369685 | orchestrator |
2025-07-12 13:57:05.369695 | orchestrator |
2025-07-12 13:57:05.369706 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:57:05.369717 | orchestrator | Saturday 12 July 2025 13:57:04 +0000 (0:00:06.587) 0:00:24.834 *********
2025-07-12 13:57:05.369727 | orchestrator | ===============================================================================
2025-07-12 13:57:05.369737 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.05s
2025-07-12 13:57:05.369748 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.59s
2025-07-12 13:57:05.369758 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.06s
2025-07-12 13:57:05.369769 | orchestrator | Create share directory -------------------------------------------------- 0.99s
2025-07-12 13:57:05.369779 | orchestrator |
2025-07-12 13:57:05.369790 | orchestrator |
2025-07-12 13:57:05.369801 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 13:57:05.369812 | orchestrator |
2025-07-12 13:57:05.369834 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 13:57:05.369846 | orchestrator | Saturday 12 July 2025 13:55:17 +0000 (0:00:00.258) 0:00:00.258 *********
2025-07-12 13:57:05.369866 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:57:05.369877 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:57:05.369887 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:57:05.369898 | orchestrator |
2025-07-12 13:57:05.369909 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 13:57:05.369920 | orchestrator | Saturday 12 July 2025 13:55:17 +0000 (0:00:00.290) 0:00:00.549 *********
2025-07-12 13:57:05.369931 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2025-07-12 13:57:05.369942 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2025-07-12 13:57:05.369962 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2025-07-12 13:57:05.369973 | orchestrator |
2025-07-12 13:57:05.369984 | orchestrator | PLAY [Apply role horizon] ******************************************************
2025-07-12 13:57:05.369994 | orchestrator |
2025-07-12 13:57:05.370005 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-07-12 13:57:05.370065 | orchestrator | Saturday 12 July 2025 13:55:18 +0000 (0:00:00.414) 0:00:00.963 *********
2025-07-12 13:57:05.370079 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 13:57:05.370090 | orchestrator |
2025-07-12 13:57:05.370100 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2025-07-12 13:57:05.370111 | orchestrator | Saturday 12 July 2025 13:55:18 +0000 (0:00:00.511) 0:00:01.474 *********
2025-07-12 13:57:05.370128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-07-12 13:57:05.370167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-07-12 13:57:05.370190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-07-12 13:57:05.370203 | orchestrator |
2025-07-12 13:57:05.370222 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2025-07-12 13:57:05.370234 | orchestrator | Saturday 12 July 2025 13:55:19 +0000 (0:00:01.060) 0:00:02.534 *********
2025-07-12 13:57:05.370245 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:57:05.370256 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:57:05.370266 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:57:05.370277 | orchestrator |
2025-07-12 13:57:05.370287 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-07-12 13:57:05.370298 | orchestrator | Saturday 12 July 2025 13:55:20 +0000 (0:00:00.455) 0:00:02.990 *********
2025-07-12 13:57:05.370309 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2025-07-12 13:57:05.370326 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2025-07-12 13:57:05.370356 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2025-07-12 13:57:05.370368 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2025-07-12 13:57:05.370378 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2025-07-12 13:57:05.370389 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2025-07-12 13:57:05.370399 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2025-07-12 13:57:05.370410 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2025-07-12 13:57:05.370426 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2025-07-12 13:57:05.370502 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2025-07-12 13:57:05.370517 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2025-07-12 13:57:05.370527 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2025-07-12 13:57:05.370538 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2025-07-12 13:57:05.370549 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2025-07-12 13:57:05.370559 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2025-07-12 13:57:05.370570 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2025-07-12 13:57:05.370580 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2025-07-12 13:57:05.370591 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2025-07-12 13:57:05.370601 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2025-07-12 13:57:05.370612 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2025-07-12 13:57:05.370622 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2025-07-12 13:57:05.370632 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2025-07-12 13:57:05.370643 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2025-07-12 13:57:05.370653 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2025-07-12 13:57:05.370665 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2025-07-12 13:57:05.370677 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2025-07-12 13:57:05.370688 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2025-07-12 13:57:05.370699 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2025-07-12 13:57:05.370718 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2025-07-12 13:57:05.370729 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2025-07-12 13:57:05.370740 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2025-07-12 13:57:05.370750 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2025-07-12 13:57:05.370761 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2025-07-12 13:57:05.370772 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2025-07-12 13:57:05.370782 | orchestrator |
2025-07-12 13:57:05.370793 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-07-12 13:57:05.370804 | orchestrator | Saturday 12 July 2025 13:55:20 +0000 (0:00:00.756) 0:00:03.746 *********
2025-07-12 13:57:05.370814 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:57:05.370825 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:57:05.370835 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:57:05.370846 | orchestrator |
2025-07-12 13:57:05.370857 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-07-12 13:57:05.370867 | orchestrator | Saturday 12 July 2025 13:55:21 +0000 (0:00:00.314) 0:00:04.061 *********
2025-07-12 13:57:05.370878 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:57:05.370888 | orchestrator |
2025-07-12 13:57:05.370906 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-07-12 13:57:05.370917 | orchestrator | Saturday 12 July 2025 13:55:21 +0000 (0:00:00.117) 0:00:04.178 *********
2025-07-12 13:57:05.370928 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:57:05.370939 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:57:05.370949 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:57:05.370960 | orchestrator |
2025-07-12 13:57:05.370970 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-07-12 13:57:05.370981 | orchestrator | Saturday 12 July 2025 13:55:21 +0000 (0:00:00.474) 0:00:04.653 *********
2025-07-12 13:57:05.370992 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:57:05.371002 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:57:05.371013 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:57:05.371023 | orchestrator |
2025-07-12 13:57:05.371039 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-07-12 13:57:05.371051 | orchestrator | Saturday 12 July 2025 13:55:22 +0000 (0:00:00.288) 0:00:04.942 *********
2025-07-12 13:57:05.371061 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:57:05.371072 | orchestrator |
2025-07-12 13:57:05.371083 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-07-12 13:57:05.371093 | orchestrator | Saturday 12 July 2025 13:55:22 +0000 (0:00:00.127) 0:00:05.070 *********
2025-07-12 13:57:05.371104 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:57:05.371114 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:57:05.371125 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:57:05.371135 | orchestrator |
2025-07-12 13:57:05.371146 | orchestrator | TASK [horizon :
Update policy file name] *************************************** 2025-07-12 13:57:05.371157 | orchestrator | Saturday 12 July 2025 13:55:22 +0000 (0:00:00.278) 0:00:05.348 ********* 2025-07-12 13:57:05.371167 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:57:05.371178 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:57:05.371188 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:57:05.371205 | orchestrator | 2025-07-12 13:57:05.371216 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 13:57:05.371227 | orchestrator | Saturday 12 July 2025 13:55:22 +0000 (0:00:00.309) 0:00:05.657 ********* 2025-07-12 13:57:05.371237 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:05.371248 | orchestrator | 2025-07-12 13:57:05.371259 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 13:57:05.371270 | orchestrator | Saturday 12 July 2025 13:55:23 +0000 (0:00:00.342) 0:00:06.000 ********* 2025-07-12 13:57:05.371280 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:05.371291 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:57:05.371301 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:57:05.371312 | orchestrator | 2025-07-12 13:57:05.371322 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 13:57:05.371333 | orchestrator | Saturday 12 July 2025 13:55:23 +0000 (0:00:00.277) 0:00:06.277 ********* 2025-07-12 13:57:05.371372 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:57:05.371383 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:57:05.371394 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:57:05.371405 | orchestrator | 2025-07-12 13:57:05.371416 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 13:57:05.371427 | orchestrator | Saturday 12 July 2025 13:55:23 +0000 (0:00:00.336) 0:00:06.614 ********* 
2025-07-12 13:57:05.371437 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:05.371448 | orchestrator | 2025-07-12 13:57:05.371459 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 13:57:05.371470 | orchestrator | Saturday 12 July 2025 13:55:23 +0000 (0:00:00.131) 0:00:06.746 ********* 2025-07-12 13:57:05.371480 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:05.371491 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:57:05.371502 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:57:05.371512 | orchestrator | 2025-07-12 13:57:05.371523 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 13:57:05.371533 | orchestrator | Saturday 12 July 2025 13:55:24 +0000 (0:00:00.272) 0:00:07.018 ********* 2025-07-12 13:57:05.371544 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:57:05.371555 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:57:05.371565 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:57:05.371576 | orchestrator | 2025-07-12 13:57:05.371586 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 13:57:05.371597 | orchestrator | Saturday 12 July 2025 13:55:24 +0000 (0:00:00.504) 0:00:07.523 ********* 2025-07-12 13:57:05.371607 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:05.371618 | orchestrator | 2025-07-12 13:57:05.371628 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 13:57:05.371639 | orchestrator | Saturday 12 July 2025 13:55:24 +0000 (0:00:00.114) 0:00:07.638 ********* 2025-07-12 13:57:05.371650 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:05.371660 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:57:05.371671 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:57:05.371681 | orchestrator | 2025-07-12 13:57:05.371692 | orchestrator | TASK 
[horizon : Update policy file name] *************************************** 2025-07-12 13:57:05.371703 | orchestrator | Saturday 12 July 2025 13:55:25 +0000 (0:00:00.309) 0:00:07.948 ********* 2025-07-12 13:57:05.371713 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:57:05.371724 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:57:05.371734 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:57:05.371745 | orchestrator | 2025-07-12 13:57:05.371756 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 13:57:05.371766 | orchestrator | Saturday 12 July 2025 13:55:25 +0000 (0:00:00.306) 0:00:08.254 ********* 2025-07-12 13:57:05.371777 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:05.371787 | orchestrator | 2025-07-12 13:57:05.371798 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 13:57:05.371809 | orchestrator | Saturday 12 July 2025 13:55:25 +0000 (0:00:00.123) 0:00:08.377 ********* 2025-07-12 13:57:05.371826 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:05.371836 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:57:05.371847 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:57:05.371857 | orchestrator | 2025-07-12 13:57:05.371868 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 13:57:05.371879 | orchestrator | Saturday 12 July 2025 13:55:26 +0000 (0:00:00.472) 0:00:08.849 ********* 2025-07-12 13:57:05.371890 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:57:05.371907 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:57:05.371918 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:57:05.371928 | orchestrator | 2025-07-12 13:57:05.371939 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 13:57:05.371950 | orchestrator | Saturday 12 July 2025 13:55:26 +0000 (0:00:00.311) 
0:00:09.161 ********* 2025-07-12 13:57:05.371961 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:05.371971 | orchestrator | 2025-07-12 13:57:05.371982 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 13:57:05.371992 | orchestrator | Saturday 12 July 2025 13:55:26 +0000 (0:00:00.129) 0:00:09.290 ********* 2025-07-12 13:57:05.372003 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:05.372014 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:57:05.372024 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:57:05.372035 | orchestrator | 2025-07-12 13:57:05.372051 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 13:57:05.372062 | orchestrator | Saturday 12 July 2025 13:55:26 +0000 (0:00:00.328) 0:00:09.619 ********* 2025-07-12 13:57:05.372072 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:57:05.372083 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:57:05.372094 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:57:05.372104 | orchestrator | 2025-07-12 13:57:05.372115 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 13:57:05.372126 | orchestrator | Saturday 12 July 2025 13:55:27 +0000 (0:00:00.333) 0:00:09.952 ********* 2025-07-12 13:57:05.372137 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:05.372147 | orchestrator | 2025-07-12 13:57:05.372158 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 13:57:05.372169 | orchestrator | Saturday 12 July 2025 13:55:27 +0000 (0:00:00.141) 0:00:10.094 ********* 2025-07-12 13:57:05.372179 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:05.372190 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:57:05.372200 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:57:05.372211 | orchestrator | 2025-07-12 13:57:05.372222 
| orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 13:57:05.372232 | orchestrator | Saturday 12 July 2025 13:55:27 +0000 (0:00:00.483) 0:00:10.578 ********* 2025-07-12 13:57:05.372243 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:57:05.372254 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:57:05.372264 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:57:05.372275 | orchestrator | 2025-07-12 13:57:05.372286 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 13:57:05.372296 | orchestrator | Saturday 12 July 2025 13:55:28 +0000 (0:00:00.300) 0:00:10.879 ********* 2025-07-12 13:57:05.372307 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:05.372317 | orchestrator | 2025-07-12 13:57:05.372328 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 13:57:05.372386 | orchestrator | Saturday 12 July 2025 13:55:28 +0000 (0:00:00.122) 0:00:11.002 ********* 2025-07-12 13:57:05.372399 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:05.372410 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:57:05.372420 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:57:05.372431 | orchestrator | 2025-07-12 13:57:05.372441 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 13:57:05.372452 | orchestrator | Saturday 12 July 2025 13:55:28 +0000 (0:00:00.294) 0:00:11.297 ********* 2025-07-12 13:57:05.372470 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:57:05.372481 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:57:05.372491 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:57:05.372502 | orchestrator | 2025-07-12 13:57:05.372513 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 13:57:05.372523 | orchestrator | Saturday 12 July 2025 13:55:28 +0000 
(0:00:00.494) 0:00:11.791 ********* 2025-07-12 13:57:05.372534 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:05.372544 | orchestrator | 2025-07-12 13:57:05.372555 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 13:57:05.372566 | orchestrator | Saturday 12 July 2025 13:55:29 +0000 (0:00:00.135) 0:00:11.927 ********* 2025-07-12 13:57:05.372576 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:05.372587 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:57:05.372597 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:57:05.372608 | orchestrator | 2025-07-12 13:57:05.372618 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-07-12 13:57:05.372629 | orchestrator | Saturday 12 July 2025 13:55:29 +0000 (0:00:00.306) 0:00:12.233 ********* 2025-07-12 13:57:05.372640 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:57:05.372650 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:57:05.372661 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:57:05.372671 | orchestrator | 2025-07-12 13:57:05.372682 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-07-12 13:57:05.372692 | orchestrator | Saturday 12 July 2025 13:55:30 +0000 (0:00:01.540) 0:00:13.774 ********* 2025-07-12 13:57:05.372703 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-07-12 13:57:05.372714 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-07-12 13:57:05.372724 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-07-12 13:57:05.372735 | orchestrator | 2025-07-12 13:57:05.372746 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-07-12 13:57:05.372756 | orchestrator | Saturday 12 
July 2025 13:55:32 +0000 (0:00:01.820) 0:00:15.595 ********* 2025-07-12 13:57:05.372767 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-07-12 13:57:05.372778 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-07-12 13:57:05.372788 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-07-12 13:57:05.372799 | orchestrator | 2025-07-12 13:57:05.372810 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-07-12 13:57:05.372826 | orchestrator | Saturday 12 July 2025 13:55:34 +0000 (0:00:02.199) 0:00:17.794 ********* 2025-07-12 13:57:05.372838 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-07-12 13:57:05.372849 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-07-12 13:57:05.372859 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-07-12 13:57:05.372870 | orchestrator | 2025-07-12 13:57:05.372880 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-07-12 13:57:05.372890 | orchestrator | Saturday 12 July 2025 13:55:36 +0000 (0:00:01.844) 0:00:19.639 ********* 2025-07-12 13:57:05.372899 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:05.372919 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:57:05.372929 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:57:05.372938 | orchestrator | 2025-07-12 13:57:05.372947 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-07-12 13:57:05.372957 | orchestrator | Saturday 12 July 2025 13:55:37 +0000 (0:00:00.282) 0:00:19.922 ********* 2025-07-12 
13:57:05.372966 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:05.372982 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:57:05.372991 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:57:05.373000 | orchestrator | 2025-07-12 13:57:05.373010 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-07-12 13:57:05.373019 | orchestrator | Saturday 12 July 2025 13:55:37 +0000 (0:00:00.310) 0:00:20.233 ********* 2025-07-12 13:57:05.373029 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:57:05.373038 | orchestrator | 2025-07-12 13:57:05.373048 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-07-12 13:57:05.373057 | orchestrator | Saturday 12 July 2025 13:55:38 +0000 (0:00:00.800) 0:00:21.033 ********* 2025-07-12 13:57:05.373068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 13:57:05.373096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 13:57:05.373114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 
'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 13:57:05.373125 | orchestrator | 2025-07-12 13:57:05.373134 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-07-12 13:57:05.373144 | orchestrator | Saturday 12 July 2025 13:55:39 +0000 (0:00:01.449) 0:00:22.482 ********* 2025-07-12 13:57:05.373168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 
'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': 
True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 13:57:05.373185 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:57:05.373201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 13:57:05.373212 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:05.373228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 13:57:05.373245 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:57:05.373254 | orchestrator | 2025-07-12 13:57:05.373264 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-07-12 13:57:05.373273 | orchestrator | Saturday 12 July 2025 13:55:40 +0000 (0:00:00.779) 0:00:23.261 ********* 2025-07-12 13:57:05.373296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 13:57:05.373317 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:05.373327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 13:57:05.373351 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:57:05.373374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 
'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 13:57:05.373391 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:57:05.373401 | orchestrator | 2025-07-12 13:57:05.373411 | orchestrator | TASK [horizon : Deploy horizon 
container] ************************************** 2025-07-12 13:57:05.373420 | orchestrator | Saturday 12 July 2025 13:55:41 +0000 (0:00:01.087) 0:00:24.348 ********* 2025-07-12 13:57:05.373431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 13:57:05.373455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 13:57:05.373472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': 
'80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 13:57:05.373483 | orchestrator | 2025-07-12 13:57:05.373493 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-07-12 13:57:05.373503 | orchestrator | Saturday 12 July 2025 13:55:42 +0000 (0:00:01.336) 0:00:25.685 ********* 2025-07-12 13:57:05.373518 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:05.373528 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:57:05.373537 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:57:05.373546 | orchestrator | 2025-07-12 13:57:05.373556 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-07-12 13:57:05.373566 | orchestrator | Saturday 12 July 2025 13:55:43 +0000 (0:00:00.304) 0:00:25.989 ********* 2025-07-12 13:57:05.373580 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:57:05.373590 | orchestrator | 2025-07-12 13:57:05.373600 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-07-12 13:57:05.373610 | orchestrator | Saturday 12 July 2025 13:55:43 +0000 (0:00:00.704) 
0:00:26.694 ********* 2025-07-12 13:57:05.373619 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:57:05.373629 | orchestrator | 2025-07-12 13:57:05.373638 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-07-12 13:57:05.373648 | orchestrator | Saturday 12 July 2025 13:55:46 +0000 (0:00:02.232) 0:00:28.927 ********* 2025-07-12 13:57:05.373657 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:57:05.373667 | orchestrator | 2025-07-12 13:57:05.373676 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-07-12 13:57:05.373693 | orchestrator | Saturday 12 July 2025 13:55:48 +0000 (0:00:02.092) 0:00:31.019 ********* 2025-07-12 13:57:05.373703 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:57:05.373713 | orchestrator | 2025-07-12 13:57:05.373722 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-07-12 13:57:05.373732 | orchestrator | Saturday 12 July 2025 13:56:03 +0000 (0:00:15.361) 0:00:46.380 ********* 2025-07-12 13:57:05.373741 | orchestrator | 2025-07-12 13:57:05.373751 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-07-12 13:57:05.373760 | orchestrator | Saturday 12 July 2025 13:56:03 +0000 (0:00:00.064) 0:00:46.445 ********* 2025-07-12 13:57:05.373770 | orchestrator | 2025-07-12 13:57:05.373779 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-07-12 13:57:05.373789 | orchestrator | Saturday 12 July 2025 13:56:03 +0000 (0:00:00.066) 0:00:46.512 ********* 2025-07-12 13:57:05.373798 | orchestrator | 2025-07-12 13:57:05.373808 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-07-12 13:57:05.373817 | orchestrator | Saturday 12 July 2025 13:56:03 +0000 (0:00:00.066) 0:00:46.579 ********* 2025-07-12 13:57:05.373827 | orchestrator | 
changed: [testbed-node-0] 2025-07-12 13:57:05.373836 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:57:05.373846 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:57:05.373855 | orchestrator | 2025-07-12 13:57:05.373865 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:57:05.373875 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-07-12 13:57:05.373884 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-07-12 13:57:05.373894 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-07-12 13:57:05.373903 | orchestrator | 2025-07-12 13:57:05.373913 | orchestrator | 2025-07-12 13:57:05.373923 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:57:05.373932 | orchestrator | Saturday 12 July 2025 13:57:02 +0000 (0:00:58.848) 0:01:45.428 ********* 2025-07-12 13:57:05.373942 | orchestrator | =============================================================================== 2025-07-12 13:57:05.373951 | orchestrator | horizon : Restart horizon container ------------------------------------ 58.85s 2025-07-12 13:57:05.373961 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.36s 2025-07-12 13:57:05.373970 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.23s 2025-07-12 13:57:05.373986 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.20s 2025-07-12 13:57:05.373996 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.09s 2025-07-12 13:57:05.374005 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.84s 2025-07-12 13:57:05.374040 | orchestrator | horizon : Copying 
over horizon.conf ------------------------------------- 1.82s 2025-07-12 13:57:05.374053 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.54s 2025-07-12 13:57:05.374062 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.45s 2025-07-12 13:57:05.374071 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.34s 2025-07-12 13:57:05.374081 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.09s 2025-07-12 13:57:05.374090 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.06s 2025-07-12 13:57:05.374100 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.80s 2025-07-12 13:57:05.374109 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.78s 2025-07-12 13:57:05.374119 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.76s 2025-07-12 13:57:05.374128 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.70s 2025-07-12 13:57:05.374137 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.51s 2025-07-12 13:57:05.374147 | orchestrator | horizon : Update policy file name --------------------------------------- 0.50s 2025-07-12 13:57:05.374156 | orchestrator | horizon : Update policy file name --------------------------------------- 0.49s 2025-07-12 13:57:05.374165 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.48s 2025-07-12 13:57:05.374175 | orchestrator | 2025-07-12 13:57:05 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:57:08.412943 | orchestrator | 2025-07-12 13:57:08 | INFO  | Task 7aa86ed7-f566-4724-b014-4d0e7b63c40e is in state STARTED 2025-07-12 13:57:08.415465 | orchestrator | 2025-07-12 13:57:08 | INFO  
| Task 2a9b2b11-9d28-448f-a2ca-0b47d0650c8f is in state STARTED 2025-07-12 13:57:08.415534 | orchestrator | 2025-07-12 13:57:08 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:57:57.190501 | orchestrator | 2025-07-12 
13:57:57 | INFO  | Task 7aa86ed7-f566-4724-b014-4d0e7b63c40e is in state STARTED 2025-07-12 13:57:57.192719 | orchestrator | 2025-07-12 13:57:57 | INFO  | Task 2a9b2b11-9d28-448f-a2ca-0b47d0650c8f is in state STARTED 2025-07-12 13:57:57.192753 | orchestrator | 2025-07-12 13:57:57 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:58:00.244645 | orchestrator | 2025-07-12 13:58:00.244777 | orchestrator | 2025-07-12 13:58:00.244807 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 13:58:00.244829 | orchestrator | 2025-07-12 13:58:00.244851 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 13:58:00.244872 | orchestrator | Saturday 12 July 2025 13:55:17 +0000 (0:00:00.257) 0:00:00.257 ********* 2025-07-12 13:58:00.244902 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:58:00.244920 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:58:00.244938 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:58:00.245782 | orchestrator | 2025-07-12 13:58:00.245827 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 13:58:00.245845 | orchestrator | Saturday 12 July 2025 13:55:17 +0000 (0:00:00.318) 0:00:00.576 ********* 2025-07-12 13:58:00.245863 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-07-12 13:58:00.245883 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-07-12 13:58:00.245899 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-07-12 13:58:00.245918 | orchestrator | 2025-07-12 13:58:00.245935 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-07-12 13:58:00.245954 | orchestrator | 2025-07-12 13:58:00.245973 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-12 13:58:00.245989 | orchestrator | Saturday 12 July 
2025 13:55:18 +0000 (0:00:00.428) 0:00:01.004 ********* 2025-07-12 13:58:00.246006 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:58:00.246100 | orchestrator | 2025-07-12 13:58:00.246121 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-07-12 13:58:00.246140 | orchestrator | Saturday 12 July 2025 13:55:18 +0000 (0:00:00.538) 0:00:01.542 ********* 2025-07-12 13:58:00.246168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 13:58:00.246217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 13:58:00.246395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 13:58:00.246426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 13:58:00.246446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 13:58:00.246463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 13:58:00.246481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 13:58:00.246527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 13:58:00.246548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 13:58:00.246565 | orchestrator | 2025-07-12 13:58:00.246582 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-07-12 13:58:00.246658 | orchestrator | Saturday 12 July 2025 13:55:20 +0000 (0:00:01.711) 0:00:03.254 ********* 
2025-07-12 13:58:00.246681 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-07-12 13:58:00.246700 | orchestrator | 2025-07-12 13:58:00.246719 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-07-12 13:58:00.246737 | orchestrator | Saturday 12 July 2025 13:55:21 +0000 (0:00:00.882) 0:00:04.137 ********* 2025-07-12 13:58:00.246754 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:58:00.246772 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:58:00.246790 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:58:00.246808 | orchestrator | 2025-07-12 13:58:00.246828 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-07-12 13:58:00.246846 | orchestrator | Saturday 12 July 2025 13:55:21 +0000 (0:00:00.466) 0:00:04.604 ********* 2025-07-12 13:58:00.246865 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-12 13:58:00.246884 | orchestrator | 2025-07-12 13:58:00.246903 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-12 13:58:00.246923 | orchestrator | Saturday 12 July 2025 13:55:22 +0000 (0:00:00.656) 0:00:05.260 ********* 2025-07-12 13:58:00.246941 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:58:00.246959 | orchestrator | 2025-07-12 13:58:00.246978 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-07-12 13:58:00.246997 | orchestrator | Saturday 12 July 2025 13:55:23 +0000 (0:00:00.532) 0:00:05.793 ********* 2025-07-12 13:58:00.247019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 13:58:00.247055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 13:58:00.247104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 13:58:00.247127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 13:58:00.247147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 13:58:00.247176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 13:58:00.247194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 13:58:00.247211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 13:58:00.247238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 13:58:00.247258 | orchestrator | 2025-07-12 13:58:00.247278 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-07-12 13:58:00.247297 | orchestrator | Saturday 12 July 2025 13:55:26 +0000 (0:00:03.598) 0:00:09.392 ********* 2025-07-12 13:58:00.247449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 
'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 13:58:00.247480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 13:58:00.247509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 13:58:00.247526 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:58:00.247552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 13:58:00.247570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 13:58:00.247599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 13:58:00.247615 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:58:00.247633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 13:58:00.247660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 13:58:00.247678 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 13:58:00.247690 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:58:00.247700 | orchestrator | 2025-07-12 13:58:00.247710 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-07-12 13:58:00.247719 | orchestrator | Saturday 12 July 2025 13:55:27 +0000 (0:00:00.545) 0:00:09.937 ********* 2025-07-12 13:58:00.247734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}}}})  2025-07-12 13:58:00.247753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 13:58:00.247763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 13:58:00.247779 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:58:00.247790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 13:58:00.247800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 13:58:00.247815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 13:58:00.247825 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:58:00.247843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 13:58:00.247866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 13:58:00.247877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 13:58:00.247887 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:58:00.247896 | orchestrator | 2025-07-12 13:58:00.247906 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-07-12 13:58:00.247915 | orchestrator | Saturday 12 July 2025 13:55:27 +0000 (0:00:00.744) 0:00:10.681 ********* 2025-07-12 13:58:00.247926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 13:58:00.247940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 13:58:00 | INFO  | Task 7aa86ed7-f566-4724-b014-4d0e7b63c40e is in state SUCCESS 2025-07-12 13:58:00.247957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra':
['balance roundrobin']}}}}) 2025-07-12 13:58:00.247986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 13:58:00.247996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 13:58:00.248006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 
13:58:00.248020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-12 13:58:00.248035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-12 13:58:00.248051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-12 13:58:00.248061 | orchestrator |
2025-07-12 13:58:00.248070 | orchestrator | TASK [keystone : Copying over keystone.conf] ***********************************
2025-07-12 13:58:00.248080 | orchestrator | Saturday 12 July 2025 13:55:31 +0000 (0:00:03.616) 0:00:14.298 *********
2025-07-12 13:58:00.248090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-12 13:58:00.248101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-12 13:58:00.248115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-12 13:58:00.248131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-12 13:58:00.248149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-12 13:58:00.248159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-12 13:58:00.248169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-12 13:58:00.248179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-12 13:58:00.248193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-12 13:58:00.248203 | orchestrator |
2025-07-12 13:58:00.248212 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2025-07-12 13:58:00.248227 | orchestrator | Saturday 12 July 2025 13:55:36 +0000 (0:00:05.451) 0:00:19.750 *********
2025-07-12 13:58:00.248237 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:58:00.248247 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:58:00.248256 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:58:00.248265 | orchestrator |
2025-07-12 13:58:00.248275 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2025-07-12 13:58:00.248284 | orchestrator | Saturday 12 July 2025 13:55:38 +0000 (0:00:01.351) 0:00:21.101 *********
2025-07-12 13:58:00.248294 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:58:00.248335 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:58:00.248355 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:58:00.248366 | orchestrator |
2025-07-12 13:58:00.248376 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2025-07-12 13:58:00.248385 | orchestrator | Saturday 12 July 2025 13:55:39 +0000 (0:00:00.693) 0:00:21.795 *********
2025-07-12 13:58:00.248394 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:58:00.248404 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:58:00.248413 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:58:00.248423 | orchestrator |
2025-07-12 13:58:00.248432 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2025-07-12 13:58:00.248441 | orchestrator | Saturday 12 July 2025 13:55:39 +0000 (0:00:00.483) 0:00:22.278 *********
2025-07-12 13:58:00.248451 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:58:00.248460 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:58:00.248470 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:58:00.248479 | orchestrator |
2025-07-12 13:58:00.248488 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2025-07-12 13:58:00.248498 | orchestrator | Saturday 12 July 2025 13:55:39 +0000 (0:00:00.290) 0:00:22.569 *********
2025-07-12 13:58:00.248508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-12 13:58:00.248519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-12 13:58:00.248535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-12 13:58:00.248558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-12 13:58:00.248569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-12 13:58:00.248579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-12 13:58:00.248590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-12 13:58:00.248600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-12 13:58:00.248622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-12 13:58:00.248632 | orchestrator |
2025-07-12 13:58:00.248642 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-07-12 13:58:00.248652 | orchestrator | Saturday 12 July 2025 13:55:42 +0000 (0:00:02.338) 0:00:24.907 *********
2025-07-12 13:58:00.248661 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:58:00.248674 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:58:00.248691 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:58:00.248707 | orchestrator |
2025-07-12 13:58:00.248722 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2025-07-12 13:58:00.248738 | orchestrator | Saturday 12 July 2025 13:55:42 +0000 (0:00:00.281) 0:00:25.189 *********
2025-07-12 13:58:00.248753 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-07-12 13:58:00.248785 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-07-12 13:58:00.248803 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-07-12 13:58:00.248819 | orchestrator |
2025-07-12 13:58:00.248836 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2025-07-12 13:58:00.248852 | orchestrator | Saturday 12 July 2025 13:55:44 +0000 (0:00:01.965) 0:00:27.154 *********
2025-07-12 13:58:00.248868 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-12 13:58:00.248885 | orchestrator |
2025-07-12 13:58:00.248900 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2025-07-12 13:58:00.248917 | orchestrator | Saturday 12 July 2025 13:55:45 +0000 (0:00:00.948) 0:00:28.103 *********
2025-07-12 13:58:00.248933 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:58:00.248949 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:58:00.248967 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:58:00.248976 | orchestrator |
2025-07-12 13:58:00.248986 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2025-07-12 13:58:00.248995 | orchestrator | Saturday 12 July 2025 13:55:45 +0000 (0:00:00.538) 0:00:28.642 *********
2025-07-12 13:58:00.249006 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-07-12 13:58:00.249022 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-07-12 13:58:00.249038 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-12 13:58:00.249053 | orchestrator |
2025-07-12 13:58:00.249068 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2025-07-12 13:58:00.249083 | orchestrator | Saturday 12 July 2025 13:55:46 +0000 (0:00:01.035) 0:00:29.677 *********
2025-07-12 13:58:00.249099 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:58:00.249115 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:58:00.249131 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:58:00.249147 | orchestrator |
2025-07-12 13:58:00.249165 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2025-07-12 13:58:00.249181 | orchestrator | Saturday 12 July 2025 13:55:47 +0000 (0:00:00.332) 0:00:30.009 *********
2025-07-12 13:58:00.249196 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-07-12 13:58:00.249206 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-07-12 13:58:00.249224 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-07-12 13:58:00.249234 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-07-12 13:58:00.249243 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-07-12 13:58:00.249253 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-07-12 13:58:00.249262 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-07-12 13:58:00.249272 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-07-12 13:58:00.249281 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-07-12 13:58:00.249290 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-07-12 13:58:00.249300 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-07-12 13:58:00.249327 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-07-12 13:58:00.249337 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-07-12 13:58:00.249346 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-07-12 13:58:00.249355 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-07-12 13:58:00.249365 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-07-12 13:58:00.249374 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-07-12 13:58:00.249389 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-07-12 13:58:00.249399 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-07-12 13:58:00.249408 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-07-12 13:58:00.249417 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-07-12 13:58:00.249427 | orchestrator |
2025-07-12 13:58:00.249436 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2025-07-12 13:58:00.249446 | orchestrator | Saturday 12 July 2025 13:55:55 +0000 (0:00:08.604) 0:00:38.614 *********
2025-07-12 13:58:00.249455 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-07-12 13:58:00.249464 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-07-12 13:58:00.249473 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-07-12 13:58:00.249483 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-07-12 13:58:00.249500 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-07-12 13:58:00.249509 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-07-12 13:58:00.249519 | orchestrator |
2025-07-12 13:58:00.249528 | orchestrator | TASK [keystone : Check keystone containers] ************************************
2025-07-12 13:58:00.249537 | orchestrator | Saturday 12 July 2025 13:55:58 +0000 (0:00:02.566) 0:00:41.181 *********
2025-07-12 13:58:00.249548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-12 13:58:00.249570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-12 13:58:00.249586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-12 13:58:00.249597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-12 13:58:00.249614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-12 13:58:00.249630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-12 13:58:00.249640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-12 13:58:00.249650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-12 13:58:00.249660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-12 13:58:00.249670 | orchestrator |
2025-07-12 13:58:00.249679 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-07-12 13:58:00.249693 | orchestrator | Saturday 12 July 2025 13:56:00 +0000 (0:00:02.247) 0:00:43.428 *********
2025-07-12 13:58:00.249703 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:58:00.249712 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:58:00.249722 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:58:00.249731 | orchestrator |
2025-07-12 13:58:00.249741 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2025-07-12 13:58:00.249750 | orchestrator | Saturday 12 July 2025 13:56:00 +0000 (0:00:00.316) 0:00:43.745 *********
2025-07-12 13:58:00.249759 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:58:00.249769 | orchestrator |
2025-07-12 13:58:00.249778 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2025-07-12 13:58:00.249787 | orchestrator | Saturday 12 July 2025 13:56:03 +0000 (0:00:02.174) 0:00:45.920 *********
2025-07-12 13:58:00.249797 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:58:00.249806 | orchestrator |
2025-07-12 13:58:00.249815 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2025-07-12 13:58:00.249825 | orchestrator | Saturday 12 July 2025 13:56:05 +0000 (0:00:02.582) 0:00:48.502 *********
2025-07-12 13:58:00.249834 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:58:00.249843 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:58:00.249858 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:58:00.249868 | orchestrator |
2025-07-12 13:58:00.249877 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2025-07-12 13:58:00.249892 | orchestrator | Saturday 12 July 2025 13:56:06 +0000 (0:00:00.847) 0:00:49.350 *********
2025-07-12 13:58:00.249902 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:58:00.249911 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:58:00.249920 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:58:00.249929 | orchestrator |
2025-07-12 13:58:00.249939 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2025-07-12 13:58:00.249948 | orchestrator | Saturday 12 July 2025 13:56:06 +0000 (0:00:00.317) 0:00:49.668 *********
2025-07-12 13:58:00.249958 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:58:00.249967 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:58:00.249976 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:58:00.249986 | orchestrator |
2025-07-12 13:58:00.249995 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2025-07-12 13:58:00.250004 | orchestrator | Saturday 12 July 2025 13:56:07 +0000 (0:00:00.354) 0:00:50.023 *********
2025-07-12 13:58:00.250014 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:58:00.250057 | orchestrator |
2025-07-12 13:58:00.250067 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2025-07-12 13:58:00.250076 | orchestrator | Saturday 12 July 2025 13:56:20 +0000 (0:00:13.003) 0:01:03.027 *********
2025-07-12 13:58:00.250086 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:58:00.250095 | orchestrator |
2025-07-12 13:58:00.250105 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-07-12 13:58:00.250114 | orchestrator | Saturday 12 July 2025 13:56:30 +0000 (0:00:10.007) 0:01:13.034 *********
2025-07-12 13:58:00.250124 | orchestrator |
2025-07-12 13:58:00.250133 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-07-12 13:58:00.250143 | orchestrator | Saturday 12 July 2025 13:56:30 +0000 (0:00:00.259) 0:01:13.294 *********
2025-07-12 13:58:00.250152 | orchestrator |
2025-07-12 13:58:00.250162 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-07-12 13:58:00.250171 | orchestrator | Saturday 12 July 2025 13:56:30 +0000 (0:00:00.064) 0:01:13.358 *********
2025-07-12 13:58:00.250181 | orchestrator |
2025-07-12 13:58:00.250190 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2025-07-12 13:58:00.250200 | orchestrator | Saturday 12 July 2025 13:56:30 +0000 (0:00:00.060) 0:01:13.419 *********
2025-07-12 13:58:00.250209 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:58:00.250219 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:58:00.250229 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:58:00.250238 | orchestrator |
2025-07-12 13:58:00.250254 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2025-07-12 13:58:00.250271 | orchestrator | Saturday 12 July 2025 13:56:52 +0000 (0:00:22.117) 0:01:35.537 *********
2025-07-12 13:58:00.250288 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:58:00.250303 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:58:00.250390 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:58:00.250407 | orchestrator |
2025-07-12 13:58:00.250423 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2025-07-12 13:58:00.250440 | orchestrator | Saturday 12 July 2025 13:57:03 +0000 (0:00:10.274) 0:01:45.811 *********
2025-07-12 13:58:00.250456 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:58:00.250473 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:58:00.250490 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:58:00.250502 | orchestrator |
2025-07-12 13:58:00.250512 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-07-12 13:58:00.250521 | orchestrator | Saturday 12 July 2025 13:57:14 +0000 (0:00:11.250) 0:01:57.062 *********
2025-07-12 13:58:00.250530 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 13:58:00.250549 | orchestrator |
2025-07-12 13:58:00.250559 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2025-07-12 13:58:00.250568 | orchestrator | Saturday 12 July 2025 13:57:15 +0000 (0:00:00.766) 0:01:57.828 *********
2025-07-12 13:58:00.250578 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:58:00.250587 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:58:00.250596 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:58:00.250605 | orchestrator |
2025-07-12 13:58:00.250615 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2025-07-12 13:58:00.250624 | orchestrator | Saturday 12 July 2025 13:57:15 +0000 (0:00:00.756) 0:01:58.585 *********
2025-07-12 13:58:00.250633 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:58:00.250643 | orchestrator |
2025-07-12 13:58:00.250652 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2025-07-12 13:58:00.250661 | orchestrator | Saturday 12 July 2025 13:57:17 +0000 (0:00:01.777) 0:02:00.363 *********
2025-07-12 13:58:00.250670 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2025-07-12 13:58:00.250680 | orchestrator |
2025-07-12 13:58:00.250689 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2025-07-12 13:58:00.250704 | orchestrator | Saturday 12 July 2025 13:57:27 +0000 (0:00:10.294) 0:02:10.657 *********
2025-07-12 13:58:00.250714 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2025-07-12 13:58:00.250723 | orchestrator |
2025-07-12 13:58:00.250732 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2025-07-12 13:58:00.250742 | orchestrator | Saturday 12 July 2025 13:57:48 +0000 (0:00:20.560) 0:02:31.217 *********
2025-07-12 13:58:00.250751 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2025-07-12 13:58:00.250761 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2025-07-12
13:58:00.250770 | orchestrator | 2025-07-12 13:58:00.250779 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-07-12 13:58:00.250788 | orchestrator | Saturday 12 July 2025 13:57:54 +0000 (0:00:06.243) 0:02:37.460 ********* 2025-07-12 13:58:00.250798 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:58:00.250807 | orchestrator | 2025-07-12 13:58:00.250816 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-07-12 13:58:00.250826 | orchestrator | Saturday 12 July 2025 13:57:55 +0000 (0:00:00.320) 0:02:37.782 ********* 2025-07-12 13:58:00.250844 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:58:00.250854 | orchestrator | 2025-07-12 13:58:00.250864 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-07-12 13:58:00.250873 | orchestrator | Saturday 12 July 2025 13:57:55 +0000 (0:00:00.110) 0:02:37.892 ********* 2025-07-12 13:58:00.250882 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:58:00.250891 | orchestrator | 2025-07-12 13:58:00.250899 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-07-12 13:58:00.250906 | orchestrator | Saturday 12 July 2025 13:57:55 +0000 (0:00:00.118) 0:02:38.011 ********* 2025-07-12 13:58:00.250914 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:58:00.250922 | orchestrator | 2025-07-12 13:58:00.250929 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-07-12 13:58:00.250937 | orchestrator | Saturday 12 July 2025 13:57:55 +0000 (0:00:00.331) 0:02:38.342 ********* 2025-07-12 13:58:00.250945 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:58:00.250952 | orchestrator | 2025-07-12 13:58:00.250960 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-12 13:58:00.250967 | orchestrator | Saturday 12 July 
2025 13:57:58 +0000 (0:00:03.175) 0:02:41.518 ********* 2025-07-12 13:58:00.250975 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:58:00.250983 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:58:00.250990 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:58:00.250998 | orchestrator | 2025-07-12 13:58:00.251006 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:58:00.251020 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-07-12 13:58:00.251029 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-07-12 13:58:00.251037 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-07-12 13:58:00.251045 | orchestrator | 2025-07-12 13:58:00.251053 | orchestrator | 2025-07-12 13:58:00.251060 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:58:00.251068 | orchestrator | Saturday 12 July 2025 13:57:59 +0000 (0:00:00.630) 0:02:42.149 ********* 2025-07-12 13:58:00.251075 | orchestrator | =============================================================================== 2025-07-12 13:58:00.251083 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 22.12s 2025-07-12 13:58:00.251090 | orchestrator | service-ks-register : keystone | Creating services --------------------- 20.56s 2025-07-12 13:58:00.251098 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.00s 2025-07-12 13:58:00.251106 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.25s 2025-07-12 13:58:00.251113 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.29s 2025-07-12 13:58:00.251121 | orchestrator | keystone : Restart keystone-fernet 
container --------------------------- 10.27s 2025-07-12 13:58:00.251128 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.01s 2025-07-12 13:58:00.251136 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.60s 2025-07-12 13:58:00.251143 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.24s 2025-07-12 13:58:00.251151 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.45s 2025-07-12 13:58:00.251159 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.62s 2025-07-12 13:58:00.251166 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.60s 2025-07-12 13:58:00.251174 | orchestrator | keystone : Creating default user role ----------------------------------- 3.18s 2025-07-12 13:58:00.251181 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.58s 2025-07-12 13:58:00.251189 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.57s 2025-07-12 13:58:00.251196 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.34s 2025-07-12 13:58:00.251204 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.25s 2025-07-12 13:58:00.251211 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.17s 2025-07-12 13:58:00.251223 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.97s 2025-07-12 13:58:00.251231 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.78s 2025-07-12 13:58:00.251238 | orchestrator | 2025-07-12 13:58:00 | INFO  | Task 2a9b2b11-9d28-448f-a2ca-0b47d0650c8f is in state STARTED 2025-07-12 13:58:00.251246 | orchestrator | 2025-07-12 13:58:00 | INFO  | Wait 1 
second(s) until the next check 2025-07-12 13:58:03.300067 | orchestrator | 2025-07-12 13:58:03 | INFO  | Task f6eeb4c3-ffaf-42dc-8681-dd1553972fc2 is in state STARTED 2025-07-12 13:58:03.300164 | orchestrator | 2025-07-12 13:58:03 | INFO  | Task f382daf1-0e6d-4c48-8a24-471257a73cee is in state STARTED 2025-07-12 13:58:03.300179 | orchestrator | 2025-07-12 13:58:03 | INFO  | Task e94ec2f7-b58f-413f-b6fe-d292b6bce572 is in state STARTED 2025-07-12 13:58:03.300204 | orchestrator | 2025-07-12 13:58:03 | INFO  | Task e34d2724-8ec9-4ff5-b21a-4bb12765a687 is in state STARTED 2025-07-12 13:58:03.301088 | orchestrator | 2025-07-12 13:58:03 | INFO  | Task 2a9b2b11-9d28-448f-a2ca-0b47d0650c8f is in state STARTED 2025-07-12 13:58:03.301435 | orchestrator | 2025-07-12 13:58:03 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:58:06.336533 | orchestrator | 2025-07-12 13:58:06 | INFO  | Task f6eeb4c3-ffaf-42dc-8681-dd1553972fc2 is in state STARTED 2025-07-12 13:58:06.336640 | orchestrator | 2025-07-12 13:58:06 | INFO  | Task f382daf1-0e6d-4c48-8a24-471257a73cee is in state STARTED 2025-07-12 13:58:06.336946 | orchestrator | 2025-07-12 13:58:06 | INFO  | Task e94ec2f7-b58f-413f-b6fe-d292b6bce572 is in state STARTED 2025-07-12 13:58:06.337503 | orchestrator | 2025-07-12 13:58:06 | INFO  | Task e34d2724-8ec9-4ff5-b21a-4bb12765a687 is in state STARTED 2025-07-12 13:58:06.338639 | orchestrator | 2025-07-12 13:58:06 | INFO  | Task 2a9b2b11-9d28-448f-a2ca-0b47d0650c8f is in state SUCCESS 2025-07-12 13:58:06.338805 | orchestrator | 2025-07-12 13:58:06 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:58:09.378841 | orchestrator | 2025-07-12 13:58:09 | INFO  | Task f6eeb4c3-ffaf-42dc-8681-dd1553972fc2 is in state STARTED 2025-07-12 13:58:09.380588 | orchestrator | 2025-07-12 13:58:09 | INFO  | Task f382daf1-0e6d-4c48-8a24-471257a73cee is in state STARTED 2025-07-12 13:58:09.382734 | orchestrator | 2025-07-12 13:58:09 | INFO  | Task 
e94ec2f7-b58f-413f-b6fe-d292b6bce572 is in state STARTED 2025-07-12 13:58:09.383725 | orchestrator | 2025-07-12 13:58:09 | INFO  | Task e34d2724-8ec9-4ff5-b21a-4bb12765a687 is in state STARTED 2025-07-12 13:58:09.387734 | orchestrator | 2025-07-12 13:58:09 | INFO  | Task 94e60d7e-5673-4752-a2f2-54a0b8a42928 is in state STARTED 2025-07-12 13:58:09.387768 | orchestrator | 2025-07-12 13:58:09 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:58:12.433756 | orchestrator | 2025-07-12 13:58:12 | INFO  | Task f6eeb4c3-ffaf-42dc-8681-dd1553972fc2 is in state STARTED 2025-07-12 13:58:12.435774 | orchestrator | 2025-07-12 13:58:12 | INFO  | Task f382daf1-0e6d-4c48-8a24-471257a73cee is in state STARTED 2025-07-12 13:58:12.437335 | orchestrator | 2025-07-12 13:58:12 | INFO  | Task e94ec2f7-b58f-413f-b6fe-d292b6bce572 is in state STARTED 2025-07-12 13:58:12.438548 | orchestrator | 2025-07-12 13:58:12 | INFO  | Task e34d2724-8ec9-4ff5-b21a-4bb12765a687 is in state STARTED 2025-07-12 13:58:12.440255 | orchestrator | 2025-07-12 13:58:12 | INFO  | Task 94e60d7e-5673-4752-a2f2-54a0b8a42928 is in state STARTED 2025-07-12 13:58:12.440280 | orchestrator | 2025-07-12 13:58:12 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:58:15.482046 | orchestrator | 2025-07-12 13:58:15 | INFO  | Task f6eeb4c3-ffaf-42dc-8681-dd1553972fc2 is in state STARTED 2025-07-12 13:58:15.484075 | orchestrator | 2025-07-12 13:58:15 | INFO  | Task f382daf1-0e6d-4c48-8a24-471257a73cee is in state STARTED 2025-07-12 13:58:15.484915 | orchestrator | 2025-07-12 13:58:15 | INFO  | Task e94ec2f7-b58f-413f-b6fe-d292b6bce572 is in state STARTED 2025-07-12 13:58:15.486494 | orchestrator | 2025-07-12 13:58:15 | INFO  | Task e34d2724-8ec9-4ff5-b21a-4bb12765a687 is in state STARTED 2025-07-12 13:58:15.487950 | orchestrator | 2025-07-12 13:58:15 | INFO  | Task 94e60d7e-5673-4752-a2f2-54a0b8a42928 is in state STARTED 2025-07-12 13:58:15.488090 | orchestrator | 2025-07-12 13:58:15 | INFO  | Wait 1 
second(s) until the next check 2025-07-12 13:58:18.534874 | orchestrator | 2025-07-12 13:58:18 | INFO  | Task f6eeb4c3-ffaf-42dc-8681-dd1553972fc2 is in state STARTED 2025-07-12 13:58:18.536427 | orchestrator | 2025-07-12 13:58:18 | INFO  | Task f382daf1-0e6d-4c48-8a24-471257a73cee is in state STARTED 2025-07-12 13:58:18.537927 | orchestrator | 2025-07-12 13:58:18 | INFO  | Task e94ec2f7-b58f-413f-b6fe-d292b6bce572 is in state STARTED 2025-07-12 13:58:18.540260 | orchestrator | 2025-07-12 13:58:18 | INFO  | Task e34d2724-8ec9-4ff5-b21a-4bb12765a687 is in state STARTED 2025-07-12 13:58:18.542104 | orchestrator | 2025-07-12 13:58:18 | INFO  | Task 94e60d7e-5673-4752-a2f2-54a0b8a42928 is in state STARTED 2025-07-12 13:58:18.542389 | orchestrator | 2025-07-12 13:58:18 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:58:21.588255 | orchestrator | 2025-07-12 13:58:21 | INFO  | Task f6eeb4c3-ffaf-42dc-8681-dd1553972fc2 is in state STARTED 2025-07-12 13:58:21.589464 | orchestrator | 2025-07-12 13:58:21 | INFO  | Task f382daf1-0e6d-4c48-8a24-471257a73cee is in state STARTED 2025-07-12 13:58:21.592258 | orchestrator | 2025-07-12 13:58:21 | INFO  | Task e94ec2f7-b58f-413f-b6fe-d292b6bce572 is in state STARTED 2025-07-12 13:58:21.596878 | orchestrator | 2025-07-12 13:58:21 | INFO  | Task e34d2724-8ec9-4ff5-b21a-4bb12765a687 is in state STARTED 2025-07-12 13:58:21.602377 | orchestrator | 2025-07-12 13:58:21 | INFO  | Task 94e60d7e-5673-4752-a2f2-54a0b8a42928 is in state STARTED 2025-07-12 13:58:21.602401 | orchestrator | 2025-07-12 13:58:21 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:58:24.654155 | orchestrator | 2025-07-12 13:58:24 | INFO  | Task f6eeb4c3-ffaf-42dc-8681-dd1553972fc2 is in state STARTED 2025-07-12 13:58:24.769374 | orchestrator | 2025-07-12 13:58:24 | INFO  | Task f382daf1-0e6d-4c48-8a24-471257a73cee is in state STARTED 2025-07-12 13:58:24.769461 | orchestrator | 2025-07-12 13:58:24 | INFO  | Task 
e94ec2f7-b58f-413f-b6fe-d292b6bce572 is in state STARTED 2025-07-12 13:58:24.769480 | orchestrator | 2025-07-12 13:58:24 | INFO  | Task e34d2724-8ec9-4ff5-b21a-4bb12765a687 is in state STARTED 2025-07-12 13:58:24.769492 | orchestrator | 2025-07-12 13:58:24 | INFO  | Task 94e60d7e-5673-4752-a2f2-54a0b8a42928 is in state STARTED 2025-07-12 13:58:24.769505 | orchestrator | 2025-07-12 13:58:24 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:58:27.711671 | orchestrator | 2025-07-12 13:58:27 | INFO  | Task f6eeb4c3-ffaf-42dc-8681-dd1553972fc2 is in state STARTED 2025-07-12 13:58:27.713154 | orchestrator | 2025-07-12 13:58:27 | INFO  | Task f382daf1-0e6d-4c48-8a24-471257a73cee is in state STARTED 2025-07-12 13:58:27.715655 | orchestrator | 2025-07-12 13:58:27 | INFO  | Task e94ec2f7-b58f-413f-b6fe-d292b6bce572 is in state STARTED 2025-07-12 13:58:27.717205 | orchestrator | 2025-07-12 13:58:27 | INFO  | Task e34d2724-8ec9-4ff5-b21a-4bb12765a687 is in state STARTED 2025-07-12 13:58:27.719334 | orchestrator | 2025-07-12 13:58:27 | INFO  | Task 94e60d7e-5673-4752-a2f2-54a0b8a42928 is in state STARTED 2025-07-12 13:58:27.719365 | orchestrator | 2025-07-12 13:58:27 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:58:30.766145 | orchestrator | 2025-07-12 13:58:30 | INFO  | Task f6eeb4c3-ffaf-42dc-8681-dd1553972fc2 is in state STARTED 2025-07-12 13:58:30.770842 | orchestrator | 2025-07-12 13:58:30 | INFO  | Task f382daf1-0e6d-4c48-8a24-471257a73cee is in state STARTED 2025-07-12 13:58:30.773498 | orchestrator | 2025-07-12 13:58:30 | INFO  | Task e94ec2f7-b58f-413f-b6fe-d292b6bce572 is in state STARTED 2025-07-12 13:58:30.776606 | orchestrator | 2025-07-12 13:58:30 | INFO  | Task e34d2724-8ec9-4ff5-b21a-4bb12765a687 is in state STARTED 2025-07-12 13:58:30.779502 | orchestrator | 2025-07-12 13:58:30 | INFO  | Task 94e60d7e-5673-4752-a2f2-54a0b8a42928 is in state STARTED 2025-07-12 13:58:30.779695 | orchestrator | 2025-07-12 13:58:30 | INFO  | Wait 1 
second(s) until the next check 2025-07-12 13:58:33.825732 | orchestrator | 2025-07-12 13:58:33 | INFO  | Task f6eeb4c3-ffaf-42dc-8681-dd1553972fc2 is in state STARTED 2025-07-12 13:58:33.828192 | orchestrator | 2025-07-12 13:58:33 | INFO  | Task f382daf1-0e6d-4c48-8a24-471257a73cee is in state STARTED 2025-07-12 13:58:33.830808 | orchestrator | 2025-07-12 13:58:33 | INFO  | Task e94ec2f7-b58f-413f-b6fe-d292b6bce572 is in state STARTED 2025-07-12 13:58:33.831832 | orchestrator | 2025-07-12 13:58:33 | INFO  | Task e34d2724-8ec9-4ff5-b21a-4bb12765a687 is in state STARTED 2025-07-12 13:58:33.833605 | orchestrator | 2025-07-12 13:58:33 | INFO  | Task 94e60d7e-5673-4752-a2f2-54a0b8a42928 is in state STARTED 2025-07-12 13:58:33.833630 | orchestrator | 2025-07-12 13:58:33 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:58:36.887012 | orchestrator | 2025-07-12 13:58:36 | INFO  | Task f6eeb4c3-ffaf-42dc-8681-dd1553972fc2 is in state STARTED 2025-07-12 13:58:36.889870 | orchestrator | 2025-07-12 13:58:36 | INFO  | Task f382daf1-0e6d-4c48-8a24-471257a73cee is in state STARTED 2025-07-12 13:58:36.892608 | orchestrator | 2025-07-12 13:58:36 | INFO  | Task e94ec2f7-b58f-413f-b6fe-d292b6bce572 is in state STARTED 2025-07-12 13:58:36.895176 | orchestrator | 2025-07-12 13:58:36 | INFO  | Task e34d2724-8ec9-4ff5-b21a-4bb12765a687 is in state STARTED 2025-07-12 13:58:36.898450 | orchestrator | 2025-07-12 13:58:36 | INFO  | Task 94e60d7e-5673-4752-a2f2-54a0b8a42928 is in state STARTED 2025-07-12 13:58:36.898695 | orchestrator | 2025-07-12 13:58:36 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:58:39.946964 | orchestrator | 2025-07-12 13:58:39 | INFO  | Task f6eeb4c3-ffaf-42dc-8681-dd1553972fc2 is in state STARTED 2025-07-12 13:58:39.947797 | orchestrator | 2025-07-12 13:58:39 | INFO  | Task f382daf1-0e6d-4c48-8a24-471257a73cee is in state STARTED 2025-07-12 13:58:39.949011 | orchestrator | 2025-07-12 13:58:39 | INFO  | Task 
e94ec2f7-b58f-413f-b6fe-d292b6bce572 is in state STARTED 2025-07-12 13:58:39.949830 | orchestrator | 2025-07-12 13:58:39 | INFO  | Task e34d2724-8ec9-4ff5-b21a-4bb12765a687 is in state STARTED 2025-07-12 13:58:39.950658 | orchestrator | 2025-07-12 13:58:39 | INFO  | Task 94e60d7e-5673-4752-a2f2-54a0b8a42928 is in state STARTED 2025-07-12 13:58:39.950689 | orchestrator | 2025-07-12 13:58:39 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:58:42.999162 | orchestrator | 2025-07-12 13:58:42.999272 | orchestrator | 2025-07-12 13:58:42.999376 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-07-12 13:58:42.999393 | orchestrator | 2025-07-12 13:58:42.999404 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-07-12 13:58:42.999415 | orchestrator | Saturday 12 July 2025 13:57:09 +0000 (0:00:00.242) 0:00:00.243 ********* 2025-07-12 13:58:42.999427 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-07-12 13:58:42.999439 | orchestrator | 2025-07-12 13:58:42.999450 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-07-12 13:58:42.999461 | orchestrator | Saturday 12 July 2025 13:57:09 +0000 (0:00:00.228) 0:00:00.471 ********* 2025-07-12 13:58:42.999472 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-07-12 13:58:42.999482 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-07-12 13:58:42.999493 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-07-12 13:58:42.999504 | orchestrator | 2025-07-12 13:58:42.999515 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-07-12 13:58:42.999526 | orchestrator | Saturday 12 July 2025 13:57:10 +0000 (0:00:01.223) 0:00:01.694 ********* 
2025-07-12 13:58:42.999537 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-07-12 13:58:42.999548 | orchestrator | 2025-07-12 13:58:42.999559 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-07-12 13:58:42.999599 | orchestrator | Saturday 12 July 2025 13:57:12 +0000 (0:00:01.159) 0:00:02.853 ********* 2025-07-12 13:58:42.999610 | orchestrator | changed: [testbed-manager] 2025-07-12 13:58:42.999621 | orchestrator | 2025-07-12 13:58:42.999632 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-07-12 13:58:42.999642 | orchestrator | Saturday 12 July 2025 13:57:13 +0000 (0:00:00.979) 0:00:03.833 ********* 2025-07-12 13:58:42.999653 | orchestrator | changed: [testbed-manager] 2025-07-12 13:58:42.999663 | orchestrator | 2025-07-12 13:58:42.999674 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-07-12 13:58:42.999685 | orchestrator | Saturday 12 July 2025 13:57:13 +0000 (0:00:00.878) 0:00:04.711 ********* 2025-07-12 13:58:42.999696 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
2025-07-12 13:58:42.999708 | orchestrator | ok: [testbed-manager] 2025-07-12 13:58:42.999720 | orchestrator | 2025-07-12 13:58:42.999733 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-07-12 13:58:42.999745 | orchestrator | Saturday 12 July 2025 13:57:55 +0000 (0:00:41.564) 0:00:46.276 ********* 2025-07-12 13:58:42.999757 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-07-12 13:58:42.999769 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-07-12 13:58:42.999781 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-07-12 13:58:42.999794 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-07-12 13:58:42.999806 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-07-12 13:58:42.999818 | orchestrator | 2025-07-12 13:58:42.999829 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-07-12 13:58:42.999842 | orchestrator | Saturday 12 July 2025 13:57:59 +0000 (0:00:03.966) 0:00:50.243 ********* 2025-07-12 13:58:42.999854 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-07-12 13:58:42.999865 | orchestrator | 2025-07-12 13:58:42.999876 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-07-12 13:58:42.999887 | orchestrator | Saturday 12 July 2025 13:57:59 +0000 (0:00:00.448) 0:00:50.691 ********* 2025-07-12 13:58:42.999897 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:58:42.999908 | orchestrator | 2025-07-12 13:58:42.999918 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-07-12 13:58:42.999929 | orchestrator | Saturday 12 July 2025 13:57:59 +0000 (0:00:00.124) 0:00:50.816 ********* 2025-07-12 13:58:42.999939 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:58:42.999950 | orchestrator | 2025-07-12 13:58:42.999961 | orchestrator | RUNNING HANDLER 
[osism.services.cephclient : Restart cephclient service] ******* 2025-07-12 13:58:42.999987 | orchestrator | Saturday 12 July 2025 13:58:00 +0000 (0:00:00.327) 0:00:51.143 ********* 2025-07-12 13:58:42.999999 | orchestrator | changed: [testbed-manager] 2025-07-12 13:58:43.000009 | orchestrator | 2025-07-12 13:58:43.000021 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-07-12 13:58:43.000032 | orchestrator | Saturday 12 July 2025 13:58:02 +0000 (0:00:01.896) 0:00:53.040 ********* 2025-07-12 13:58:43.000043 | orchestrator | changed: [testbed-manager] 2025-07-12 13:58:43.000054 | orchestrator | 2025-07-12 13:58:43.000064 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-07-12 13:58:43.000075 | orchestrator | Saturday 12 July 2025 13:58:03 +0000 (0:00:00.826) 0:00:53.867 ********* 2025-07-12 13:58:43.000086 | orchestrator | changed: [testbed-manager] 2025-07-12 13:58:43.000096 | orchestrator | 2025-07-12 13:58:43.000107 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-07-12 13:58:43.000118 | orchestrator | Saturday 12 July 2025 13:58:03 +0000 (0:00:00.715) 0:00:54.582 ********* 2025-07-12 13:58:43.000129 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-07-12 13:58:43.000139 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-07-12 13:58:43.000150 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-07-12 13:58:43.000160 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-07-12 13:58:43.000179 | orchestrator | 2025-07-12 13:58:43.000190 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:58:43.000201 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 13:58:43.000213 | orchestrator | 2025-07-12 13:58:43.000223 | orchestrator | 2025-07-12 
13:58:43.000253 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:58:43.000264 | orchestrator | Saturday 12 July 2025 13:58:05 +0000 (0:00:01.495) 0:00:56.077 ********* 2025-07-12 13:58:43.000275 | orchestrator | =============================================================================== 2025-07-12 13:58:43.000304 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.56s 2025-07-12 13:58:43.000316 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.97s 2025-07-12 13:58:43.000327 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.90s 2025-07-12 13:58:43.000337 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.50s 2025-07-12 13:58:43.000348 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.22s 2025-07-12 13:58:43.000359 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.16s 2025-07-12 13:58:43.000369 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.98s 2025-07-12 13:58:43.000379 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.88s 2025-07-12 13:58:43.000390 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.83s 2025-07-12 13:58:43.000401 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.72s 2025-07-12 13:58:43.000411 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.45s 2025-07-12 13:58:43.000422 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.33s 2025-07-12 13:58:43.000432 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.23s 2025-07-12 13:58:43.000443 | 
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.12s 2025-07-12 13:58:43.000453 | orchestrator | 2025-07-12 13:58:43.000464 | orchestrator | 2025-07-12 13:58:43.000475 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-07-12 13:58:43.000486 | orchestrator | 2025-07-12 13:58:43.000496 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-07-12 13:58:43.000506 | orchestrator | Saturday 12 July 2025 13:58:05 +0000 (0:00:00.222) 0:00:00.222 ********* 2025-07-12 13:58:43.000517 | orchestrator | changed: [localhost] 2025-07-12 13:58:43.000528 | orchestrator | 2025-07-12 13:58:43.000538 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-07-12 13:58:43.000549 | orchestrator | Saturday 12 July 2025 13:58:06 +0000 (0:00:00.953) 0:00:01.176 ********* 2025-07-12 13:58:43.000559 | orchestrator | changed: [localhost] 2025-07-12 13:58:43.000570 | orchestrator | 2025-07-12 13:58:43.000580 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-07-12 13:58:43.000591 | orchestrator | Saturday 12 July 2025 13:58:35 +0000 (0:00:29.681) 0:00:30.857 ********* 2025-07-12 13:58:43.000602 | orchestrator | changed: [localhost] 2025-07-12 13:58:43.000612 | orchestrator | 2025-07-12 13:58:43.000743 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 13:58:43.000761 | orchestrator | 2025-07-12 13:58:43.000772 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 13:58:43.000782 | orchestrator | Saturday 12 July 2025 13:58:40 +0000 (0:00:04.827) 0:00:35.685 ********* 2025-07-12 13:58:43.000793 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:58:43.000804 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:58:43.000814 | orchestrator | ok: 
[testbed-node-2] 2025-07-12 13:58:43.000825 | orchestrator | 2025-07-12 13:58:43.000836 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 13:58:43.000856 | orchestrator | Saturday 12 July 2025 13:58:41 +0000 (0:00:00.500) 0:00:36.186 ********* 2025-07-12 13:58:43.000867 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-07-12 13:58:43.000877 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-07-12 13:58:43.000888 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-07-12 13:58:43.000899 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-07-12 13:58:43.000909 | orchestrator | 2025-07-12 13:58:43.000920 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-07-12 13:58:43.000931 | orchestrator | skipping: no hosts matched 2025-07-12 13:58:43.000941 | orchestrator | 2025-07-12 13:58:43.000958 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:58:43.000970 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:58:43.000981 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:58:43.000993 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:58:43.001004 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:58:43.001015 | orchestrator | 2025-07-12 13:58:43.001026 | orchestrator | 2025-07-12 13:58:43.001037 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:58:43.001047 | orchestrator | Saturday 12 July 2025 13:58:41 +0000 (0:00:00.454) 0:00:36.640 ********* 2025-07-12 13:58:43.001058 | 
orchestrator | =============================================================================== 2025-07-12 13:58:43.001069 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 29.68s 2025-07-12 13:58:43.001079 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.83s 2025-07-12 13:58:43.001090 | orchestrator | Ensure the destination directory exists --------------------------------- 0.95s 2025-07-12 13:58:43.001101 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.50s 2025-07-12 13:58:43.001120 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.45s 2025-07-12 13:58:43.001132 | orchestrator | 2025-07-12 13:58:42 | INFO  | Task f6eeb4c3-ffaf-42dc-8681-dd1553972fc2 is in state SUCCESS 2025-07-12 13:58:43.001143 | orchestrator | 2025-07-12 13:58:42 | INFO  | Task f382daf1-0e6d-4c48-8a24-471257a73cee is in state STARTED 2025-07-12 13:58:43.001154 | orchestrator | 2025-07-12 13:58:42 | INFO  | Task e94ec2f7-b58f-413f-b6fe-d292b6bce572 is in state STARTED 2025-07-12 13:58:43.003090 | orchestrator | 2025-07-12 13:58:43 | INFO  | Task e34d2724-8ec9-4ff5-b21a-4bb12765a687 is in state STARTED 2025-07-12 13:58:43.003352 | orchestrator | 2025-07-12 13:58:43 | INFO  | Task 94e60d7e-5673-4752-a2f2-54a0b8a42928 is in state STARTED 2025-07-12 13:58:43.003493 | orchestrator | 2025-07-12 13:58:43 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:58:46.069347 | orchestrator | 2025-07-12 13:58:46 | INFO  | Task f382daf1-0e6d-4c48-8a24-471257a73cee is in state STARTED 2025-07-12 13:58:46.069830 | orchestrator | 2025-07-12 13:58:46 | INFO  | Task e94ec2f7-b58f-413f-b6fe-d292b6bce572 is in state STARTED 2025-07-12 13:58:46.070979 | orchestrator | 2025-07-12 13:58:46 | INFO  | Task e34d2724-8ec9-4ff5-b21a-4bb12765a687 is in state STARTED 2025-07-12 13:58:46.071974 | orchestrator | 2025-07-12 13:58:46 | INFO  | Task 
94e60d7e-5673-4752-a2f2-54a0b8a42928 is in state STARTED 2025-07-12 13:58:46.073659 | orchestrator | 2025-07-12 13:58:46 | INFO  | Task 3224bba3-13ef-4790-b63c-1632e8c4af53 is in state STARTED 2025-07-12 13:58:46.073770 | orchestrator | 2025-07-12 13:58:46 | INFO  | Wait 1 second(s) until the next check [... identical "is in state STARTED" polling output for tasks f382daf1, e94ec2f7, e34d2724, 94e60d7e and 3224bba3, repeated roughly every 3 seconds from 13:58:49 to 13:59:31, omitted ...] 2025-07-12 13:59:34.642650 | orchestrator | 2025-07-12 13:59:34 | INFO  | Task 94e60d7e-5673-4752-a2f2-54a0b8a42928 is in state SUCCESS [... identical "is in state STARTED" polling output for the four remaining tasks, repeated roughly every 3 seconds from 13:59:37 to 13:59:58, omitted ...] 2025-07-12 14:00:01.977578 | orchestrator | 2025-07-12 14:00:01 | INFO  | Task 3224bba3-13ef-4790-b63c-1632e8c4af53 is in state SUCCESS 2025-07-12 14:00:01.978822 | orchestrator | 2025-07-12 14:00:01.978852 | orchestrator | 2025-07-12 14:00:01.978883 | orchestrator | PLAY [Bootstrap ceph dashboard] *********************************************** 2025-07-12 14:00:01.978890 | orchestrator | 2025-07-12 14:00:01.978896 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-07-12 14:00:01.978902 | orchestrator | Saturday 12 July 2025 13:58:08 +0000 (0:00:00.249) 0:00:00.249 ********* 2025-07-12 14:00:01.978909 | orchestrator | changed: [testbed-manager] 2025-07-12 14:00:01.978916 | orchestrator | 2025-07-12 14:00:01.978922 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-07-12 14:00:01.978928 | orchestrator | Saturday 12 July 2025 13:58:10 +0000 (0:00:01.556) 0:00:01.805 ********* 2025-07-12 14:00:01.978934 | orchestrator | changed: [testbed-manager] 2025-07-12 14:00:01.978939 | orchestrator | 2025-07-12 14:00:01.978945 | orchestrator | 
TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-07-12 14:00:01.978951 | orchestrator | Saturday 12 July 2025 13:58:11 +0000 (0:00:01.013) 0:00:02.819 ********* 2025-07-12 14:00:01.978958 | orchestrator | changed: [testbed-manager] 2025-07-12 14:00:01.978964 | orchestrator | 2025-07-12 14:00:01.978970 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-07-12 14:00:01.978976 | orchestrator | Saturday 12 July 2025 13:58:12 +0000 (0:00:00.955) 0:00:03.774 ********* 2025-07-12 14:00:01.978982 | orchestrator | changed: [testbed-manager] 2025-07-12 14:00:01.978988 | orchestrator | 2025-07-12 14:00:01.978994 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-07-12 14:00:01.978999 | orchestrator | Saturday 12 July 2025 13:58:13 +0000 (0:00:01.061) 0:00:04.836 ********* 2025-07-12 14:00:01.979006 | orchestrator | changed: [testbed-manager] 2025-07-12 14:00:01.979012 | orchestrator | 2025-07-12 14:00:01.979018 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-07-12 14:00:01.979023 | orchestrator | Saturday 12 July 2025 13:58:14 +0000 (0:00:00.984) 0:00:05.820 ********* 2025-07-12 14:00:01.979029 | orchestrator | changed: [testbed-manager] 2025-07-12 14:00:01.979035 | orchestrator | 2025-07-12 14:00:01.979040 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-07-12 14:00:01.979046 | orchestrator | Saturday 12 July 2025 13:58:15 +0000 (0:00:00.963) 0:00:06.784 ********* 2025-07-12 14:00:01.979052 | orchestrator | changed: [testbed-manager] 2025-07-12 14:00:01.979057 | orchestrator | 2025-07-12 14:00:01.979063 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-07-12 14:00:01.979082 | orchestrator | Saturday 12 July 2025 13:58:16 +0000 (0:00:01.106) 0:00:07.891 ********* 2025-07-12 
14:00:01.979088 | orchestrator | changed: [testbed-manager] 2025-07-12 14:00:01.979094 | orchestrator | 2025-07-12 14:00:01.979100 | orchestrator | TASK [Create admin user] ******************************************************* 2025-07-12 14:00:01.979106 | orchestrator | Saturday 12 July 2025 13:58:17 +0000 (0:00:01.107) 0:00:08.998 ********* 2025-07-12 14:00:01.979112 | orchestrator | changed: [testbed-manager] 2025-07-12 14:00:01.979119 | orchestrator | 2025-07-12 14:00:01.979125 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-07-12 14:00:01.979131 | orchestrator | Saturday 12 July 2025 13:59:07 +0000 (0:00:49.768) 0:00:58.766 ********* 2025-07-12 14:00:01.979137 | orchestrator | skipping: [testbed-manager] 2025-07-12 14:00:01.979144 | orchestrator | 2025-07-12 14:00:01.979150 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-07-12 14:00:01.979157 | orchestrator | 2025-07-12 14:00:01.979164 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-07-12 14:00:01.979170 | orchestrator | Saturday 12 July 2025 13:59:07 +0000 (0:00:00.132) 0:00:58.898 ********* 2025-07-12 14:00:01.979176 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:00:01.979182 | orchestrator | 2025-07-12 14:00:01.979188 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-07-12 14:00:01.979195 | orchestrator | 2025-07-12 14:00:01.979202 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-07-12 14:00:01.979209 | orchestrator | Saturday 12 July 2025 13:59:19 +0000 (0:00:11.531) 0:01:10.431 ********* 2025-07-12 14:00:01.979224 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:00:01.979231 | orchestrator | 2025-07-12 14:00:01.979238 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 
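The dashboard bootstrap tasks above map onto plain ceph CLI calls; the following is a hedged sketch of the equivalent commands (command forms from the Ceph documentation, option values taken from the task names; the password file path is illustrative, not the playbook's literal tasks):

```shell
# Disable the dashboard, adjust its mgr settings, then re-enable it.
ceph mgr module disable dashboard
ceph config set mgr mgr/dashboard/ssl false
ceph config set mgr mgr/dashboard/server_port 7000
ceph config set mgr mgr/dashboard/server_addr 0.0.0.0
ceph config set mgr mgr/dashboard/standby_behaviour error
ceph config set mgr mgr/dashboard/standby_error_status_code 404
ceph mgr module enable dashboard
# Create the admin user, reading the password from a temporary file,
# then remove the file again.
ceph dashboard ac-user-create admin -i /tmp/ceph_dashboard_password administrator
rm -f /tmp/ceph_dashboard_password
```

The subsequent plays restart the ceph manager service on each node in turn so the new dashboard settings take effect on every mgr daemon.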
2025-07-12 14:00:01.979244 | orchestrator | 2025-07-12 14:00:01.979275 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-07-12 14:00:01.979281 | orchestrator | Saturday 12 July 2025 13:59:20 +0000 (0:00:01.249) 0:01:11.680 ********* 2025-07-12 14:00:01.979287 | orchestrator | changed: [testbed-node-2] 2025-07-12 14:00:01.979294 | orchestrator | 2025-07-12 14:00:01.979300 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 14:00:01.979308 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-12 14:00:01.979317 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 14:00:01.979324 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 14:00:01.979331 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 14:00:01.979338 | orchestrator | 2025-07-12 14:00:01.979344 | orchestrator | 2025-07-12 14:00:01.979351 | orchestrator | 2025-07-12 14:00:01.979358 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 14:00:01.979365 | orchestrator | Saturday 12 July 2025 13:59:31 +0000 (0:00:11.162) 0:01:22.843 ********* 2025-07-12 14:00:01.979372 | orchestrator | =============================================================================== 2025-07-12 14:00:01.979379 | orchestrator | Create admin user ------------------------------------------------------ 49.77s 2025-07-12 14:00:01.979386 | orchestrator | Restart ceph manager service ------------------------------------------- 23.94s 2025-07-12 14:00:01.979402 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.56s 2025-07-12 14:00:01.979410 | orchestrator | Write ceph_dashboard_password to 
temporary file ------------------------- 1.11s 2025-07-12 14:00:01.979416 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.11s 2025-07-12 14:00:01.979423 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.06s 2025-07-12 14:00:01.979430 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.01s 2025-07-12 14:00:01.979436 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.98s 2025-07-12 14:00:01.979443 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.96s 2025-07-12 14:00:01.979450 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.96s 2025-07-12 14:00:01.979457 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.13s 2025-07-12 14:00:01.979464 | orchestrator | 2025-07-12 14:00:01.979471 | orchestrator | 2025-07-12 14:00:01.979478 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 14:00:01.979484 | orchestrator | 2025-07-12 14:00:01.979491 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 14:00:01.979497 | orchestrator | Saturday 12 July 2025 13:58:49 +0000 (0:00:00.235) 0:00:00.235 ********* 2025-07-12 14:00:01.979505 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:00:01.979512 | orchestrator | ok: [testbed-node-1] 2025-07-12 14:00:01.979519 | orchestrator | ok: [testbed-node-2] 2025-07-12 14:00:01.979526 | orchestrator | 2025-07-12 14:00:01.979533 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 14:00:01.979540 | orchestrator | Saturday 12 July 2025 13:58:49 +0000 (0:00:00.382) 0:00:00.617 ********* 2025-07-12 14:00:01.979547 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 
2025-07-12 14:00:01.979554 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-07-12 14:00:01.979568 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-07-12 14:00:01.979576 | orchestrator | 2025-07-12 14:00:01.979583 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-07-12 14:00:01.979589 | orchestrator | 2025-07-12 14:00:01.979596 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-07-12 14:00:01.979603 | orchestrator | Saturday 12 July 2025 13:58:50 +0000 (0:00:00.617) 0:00:01.235 ********* 2025-07-12 14:00:01.979615 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 14:00:01.979624 | orchestrator | 2025-07-12 14:00:01.979631 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-07-12 14:00:01.979638 | orchestrator | Saturday 12 July 2025 13:58:50 +0000 (0:00:00.447) 0:00:01.682 ********* 2025-07-12 14:00:01.979645 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-07-12 14:00:01.979652 | orchestrator | 2025-07-12 14:00:01.979659 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-07-12 14:00:01.979667 | orchestrator | Saturday 12 July 2025 13:58:54 +0000 (0:00:03.098) 0:00:04.781 ********* 2025-07-12 14:00:01.979673 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-07-12 14:00:01.979681 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-07-12 14:00:01.979688 | orchestrator | 2025-07-12 14:00:01.979695 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-07-12 14:00:01.979702 | orchestrator | Saturday 12 July 2025 13:58:59 
+0000 (0:00:05.765) 0:00:10.546 ********* 2025-07-12 14:00:01.979709 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-12 14:00:01.979716 | orchestrator | 2025-07-12 14:00:01.979739 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-07-12 14:00:01.979747 | orchestrator | Saturday 12 July 2025 13:59:02 +0000 (0:00:02.685) 0:00:13.231 ********* 2025-07-12 14:00:01.979752 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-12 14:00:01.979758 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-07-12 14:00:01.979764 | orchestrator | 2025-07-12 14:00:01.979770 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-07-12 14:00:01.979776 | orchestrator | Saturday 12 July 2025 13:59:05 +0000 (0:00:03.309) 0:00:16.541 ********* 2025-07-12 14:00:01.979781 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-12 14:00:01.979787 | orchestrator | 2025-07-12 14:00:01.979793 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-07-12 14:00:01.979800 | orchestrator | Saturday 12 July 2025 13:59:08 +0000 (0:00:02.796) 0:00:19.338 ********* 2025-07-12 14:00:01.979806 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-07-12 14:00:01.979811 | orchestrator | 2025-07-12 14:00:01.979817 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-07-12 14:00:01.979823 | orchestrator | Saturday 12 July 2025 13:59:12 +0000 (0:00:04.150) 0:00:23.489 ********* 2025-07-12 14:00:01.979829 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:00:01.979835 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:00:01.979841 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:00:01.979847 | orchestrator | 2025-07-12 14:00:01.979852 | orchestrator | TASK [placement : Ensuring config 
directories exist] *************************** 2025-07-12 14:00:01.979858 | orchestrator | Saturday 12 July 2025 13:59:13 +0000 (0:00:00.431) 0:00:23.921 ********* 2025-07-12 14:00:01.979877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:01.979894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:01.979906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:01.979913 | orchestrator | 2025-07-12 14:00:01.979920 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-07-12 14:00:01.979926 | orchestrator | Saturday 12 July 2025 13:59:14 +0000 (0:00:01.289) 0:00:25.210 ********* 2025-07-12 14:00:01.979933 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:00:01.979939 | orchestrator | 2025-07-12 14:00:01.979946 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-07-12 14:00:01.979953 | orchestrator | Saturday 12 July 2025 13:59:14 +0000 (0:00:00.200) 0:00:25.410 ********* 2025-07-12 14:00:01.979959 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:00:01.979966 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:00:01.979973 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:00:01.979979 | orchestrator | 2025-07-12 14:00:01.979986 | orchestrator | TASK [placement : include_tasks] 
*********************************************** 2025-07-12 14:00:01.979992 | orchestrator | Saturday 12 July 2025 13:59:15 +0000 (0:00:01.202) 0:00:26.613 ********* 2025-07-12 14:00:01.979998 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 14:00:01.980005 | orchestrator | 2025-07-12 14:00:01.980012 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-07-12 14:00:01.980019 | orchestrator | Saturday 12 July 2025 13:59:17 +0000 (0:00:01.493) 0:00:28.107 ********* 2025-07-12 14:00:01.980031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:01.980043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:01.980051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:01.980058 | orchestrator | 2025-07-12 14:00:01.980066 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-07-12 14:00:01.980073 | orchestrator | Saturday 12 July 2025 13:59:19 +0000 (0:00:02.261) 0:00:30.369 ********* 2025-07-12 14:00:01.980109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 
'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 14:00:01.980116 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:00:01.980124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 14:00:01.980137 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:00:01.980149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 14:00:01.980157 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:00:01.980164 | orchestrator | 2025-07-12 14:00:01.980170 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-07-12 14:00:01.980177 | orchestrator | Saturday 12 July 2025 13:59:20 +0000 (0:00:00.900) 0:00:31.270 ********* 2025-07-12 14:00:01.980187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 14:00:01.980194 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:00:01.980201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 14:00:01.980208 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:00:01.980215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 14:00:01.980230 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:00:01.980236 | orchestrator | 2025-07-12 14:00:01.980242 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-07-12 14:00:01.980268 | orchestrator | Saturday 12 July 2025 13:59:21 +0000 (0:00:00.698) 0:00:31.968 ********* 2025-07-12 14:00:01.980274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:01.980284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:01.980290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:01.980296 | orchestrator | 2025-07-12 14:00:01.980302 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-07-12 14:00:01.980317 | orchestrator | Saturday 12 July 2025 13:59:22 +0000 (0:00:01.564) 0:00:33.533 ********* 2025-07-12 14:00:01.980323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:01.980335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:01.980341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:01.980347 | orchestrator | 2025-07-12 14:00:01.980355 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-07-12 14:00:01.980362 | orchestrator | Saturday 12 July 2025 13:59:26 +0000 (0:00:03.826) 0:00:37.359 ********* 2025-07-12 14:00:01.980368 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-07-12 14:00:01.980374 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-07-12 14:00:01.980380 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-07-12 14:00:01.980386 | orchestrator | 2025-07-12 14:00:01.980392 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-07-12 14:00:01.980398 | orchestrator | Saturday 12 July 2025 13:59:28 +0000 (0:00:01.853) 0:00:39.212 ********* 2025-07-12 14:00:01.980404 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:00:01.980410 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:00:01.980416 | orchestrator | changed: [testbed-node-2] 2025-07-12 14:00:01.980426 | orchestrator | 2025-07-12 14:00:01.980432 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-07-12 14:00:01.980439 | orchestrator | Saturday 12 July 2025 13:59:30 +0000 (0:00:01.750) 0:00:40.963 
********* 2025-07-12 14:00:01.980445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 14:00:01.980452 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:00:01.980463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 
14:00:01.980470 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:00:01.980477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 14:00:01.980483 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:00:01.980489 | orchestrator | 2025-07-12 14:00:01.980496 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-07-12 14:00:01.980502 | orchestrator | Saturday 12 July 2025 13:59:31 +0000 (0:00:01.314) 0:00:42.278 ********* 2025-07-12 14:00:01.980511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 
'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:01.980522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:01.980533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:01.980540 | orchestrator | 2025-07-12 14:00:01.980546 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-07-12 14:00:01.980552 | orchestrator | Saturday 12 July 2025 13:59:33 +0000 (0:00:02.246) 0:00:44.524 ********* 2025-07-12 14:00:01.980559 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:00:01.980565 | orchestrator | 2025-07-12 14:00:01.980571 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-07-12 14:00:01.980577 | orchestrator | Saturday 12 July 2025 13:59:36 +0000 (0:00:02.283) 0:00:46.808 ********* 2025-07-12 14:00:01.980584 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:00:01.980590 | orchestrator | 2025-07-12 14:00:01.980595 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-07-12 14:00:01.980601 | orchestrator | Saturday 12 July 2025 13:59:38 +0000 (0:00:02.548) 0:00:49.356 ********* 2025-07-12 14:00:01.980607 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:00:01.980613 | orchestrator | 2025-07-12 14:00:01.980619 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-07-12 14:00:01.980626 | orchestrator | Saturday 12 July 2025 13:59:51 +0000 (0:00:12.537) 0:01:01.893 ********* 2025-07-12 14:00:01.980632 | orchestrator | 2025-07-12 14:00:01.980638 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-07-12 14:00:01.980644 | orchestrator | Saturday 12 July 2025 13:59:51 +0000 (0:00:00.055) 0:01:01.949 ********* 2025-07-12 14:00:01.980650 | orchestrator | 2025-07-12 14:00:01.980656 | orchestrator | TASK [placement : Flush handlers] 
**********************************************
2025-07-12 14:00:01.980662 | orchestrator | Saturday 12 July 2025 13:59:51 +0000 (0:00:00.076) 0:01:02.025 *********
2025-07-12 14:00:01.980668 | orchestrator |
2025-07-12 14:00:01.980674 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2025-07-12 14:00:01.980686 | orchestrator | Saturday 12 July 2025 13:59:51 +0000 (0:00:00.124) 0:01:02.150 *********
2025-07-12 14:00:01.980692 | orchestrator | changed: [testbed-node-2]
2025-07-12 14:00:01.980698 | orchestrator | changed: [testbed-node-1]
2025-07-12 14:00:01.980704 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:00:01.980710 | orchestrator |
2025-07-12 14:00:01.980730 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 14:00:01.980740 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-12 14:00:01.980748 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-07-12 14:00:01.980755 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-07-12 14:00:01.980761 | orchestrator |
2025-07-12 14:00:01.980767 | orchestrator |
2025-07-12 14:00:01.980773 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 14:00:01.980779 | orchestrator | Saturday 12 July 2025 13:59:59 +0000 (0:00:08.544) 0:01:10.694 *********
2025-07-12 14:00:01.980785 | orchestrator | ===============================================================================
2025-07-12 14:00:01.980792 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.54s
2025-07-12 14:00:01.980798 | orchestrator | placement : Restart placement-api container ----------------------------- 8.54s
2025-07-12 14:00:01.980804 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 5.77s
2025-07-12 14:00:01.980810 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.15s
2025-07-12 14:00:01.980817 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.83s
2025-07-12 14:00:01.980823 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.31s
2025-07-12 14:00:01.980829 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.10s
2025-07-12 14:00:01.980835 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 2.80s
2025-07-12 14:00:01.980841 | orchestrator | service-ks-register : placement | Creating projects --------------------- 2.69s
2025-07-12 14:00:01.980848 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.55s
2025-07-12 14:00:01.980854 | orchestrator | placement : Creating placement databases -------------------------------- 2.28s
2025-07-12 14:00:01.980860 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 2.26s
2025-07-12 14:00:01.980866 | orchestrator | placement : Check placement containers ---------------------------------- 2.25s
2025-07-12 14:00:01.980872 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.85s
2025-07-12 14:00:01.980878 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.75s
2025-07-12 14:00:01.980883 | orchestrator | placement : Copying over config.json files for services ----------------- 1.56s
2025-07-12 14:00:01.980889 | orchestrator | placement : include_tasks ----------------------------------------------- 1.49s
2025-07-12 14:00:01.980894 | orchestrator | placement : Copying over existing policy file --------------------------- 1.31s
2025-07-12 14:00:01.980900 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.29s
2025-07-12 14:00:01.980905 | orchestrator | placement : Set placement policy file ----------------------------------- 1.20s
2025-07-12 14:00:01.980912 | orchestrator | 2025-07-12 14:00:01 | INFO  | Task 2f1dd541-1e6c-437f-a30c-9f785d73a8b1 is in state STARTED
2025-07-12 14:00:01.980918 | orchestrator | 2025-07-12 14:00:01 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:00:05.020571 | orchestrator | 2025-07-12 14:00:05 | INFO  | Task f382daf1-0e6d-4c48-8a24-471257a73cee is in state STARTED
2025-07-12 14:00:05.024744 | orchestrator | 2025-07-12 14:00:05 | INFO  | Task e94ec2f7-b58f-413f-b6fe-d292b6bce572 is in state STARTED
2025-07-12 14:00:05.024995 | orchestrator | 2025-07-12 14:00:05 | INFO  | Task e34d2724-8ec9-4ff5-b21a-4bb12765a687 is in state STARTED
2025-07-12 14:00:05.025766 | orchestrator | 2025-07-12 14:00:05 | INFO  | Task 2f1dd541-1e6c-437f-a30c-9f785d73a8b1 is in state STARTED
2025-07-12 14:00:05.025791 | orchestrator | 2025-07-12 14:00:05 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:00:08.068676 | orchestrator | 2025-07-12 14:00:08 | INFO  | Task f382daf1-0e6d-4c48-8a24-471257a73cee is in state STARTED
2025-07-12 14:00:08.069007 | orchestrator | 2025-07-12 14:00:08 | INFO  | Task f02cc805-5e62-401c-851e-7b521928667b is in state STARTED
2025-07-12 14:00:08.070696 | orchestrator | 2025-07-12 14:00:08 | INFO  | Task e94ec2f7-b58f-413f-b6fe-d292b6bce572 is in state SUCCESS
2025-07-12 14:00:08.072332 | orchestrator |
2025-07-12 14:00:08.072422 | orchestrator |
2025-07-12 14:00:08.072437 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 14:00:08.072447 | orchestrator |
2025-07-12 14:00:08.072458 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 14:00:08.072467 | orchestrator | Saturday 12 July 2025 13:58:05 +0000 (0:00:00.352) 0:00:00.352 *********
2025-07-12 14:00:08.072477 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:00:08.072488 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:00:08.072497 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:00:08.072506 | orchestrator |
2025-07-12 14:00:08.072589 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 14:00:08.072602 | orchestrator | Saturday 12 July 2025 13:58:06 +0000 (0:00:00.367) 0:00:00.720 *********
2025-07-12 14:00:08.072612 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2025-07-12 14:00:08.072622 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2025-07-12 14:00:08.072631 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2025-07-12 14:00:08.072641 | orchestrator |
2025-07-12 14:00:08.072666 | orchestrator | PLAY [Apply role barbican] *****************************************************
2025-07-12 14:00:08.072676 | orchestrator |
2025-07-12 14:00:08.072685 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-07-12 14:00:08.072695 | orchestrator | Saturday 12 July 2025 13:58:06 +0000 (0:00:00.390) 0:00:01.110 *********
2025-07-12 14:00:08.072704 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 14:00:08.072715 | orchestrator |
2025-07-12 14:00:08.072724 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2025-07-12 14:00:08.072734 | orchestrator | Saturday 12 July 2025 13:58:07 +0000 (0:00:00.880) 0:00:01.990 *********
2025-07-12 14:00:08.072860 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2025-07-12 14:00:08.072872 | orchestrator |
2025-07-12 14:00:08.072884 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2025-07-12 14:00:08.073321 | orchestrator | Saturday 12 July 2025 13:58:11 +0000
(0:00:03.806) 0:00:05.797 ********* 2025-07-12 14:00:08.073336 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-07-12 14:00:08.073346 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-07-12 14:00:08.073356 | orchestrator | 2025-07-12 14:00:08.073366 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-07-12 14:00:08.073375 | orchestrator | Saturday 12 July 2025 13:58:18 +0000 (0:00:06.974) 0:00:12.771 ********* 2025-07-12 14:00:08.073385 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-07-12 14:00:08.073395 | orchestrator | 2025-07-12 14:00:08.073405 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-07-12 14:00:08.073414 | orchestrator | Saturday 12 July 2025 13:58:21 +0000 (0:00:03.380) 0:00:16.151 ********* 2025-07-12 14:00:08.073454 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-12 14:00:08.073464 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-07-12 14:00:08.073474 | orchestrator | 2025-07-12 14:00:08.073484 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-07-12 14:00:08.073494 | orchestrator | Saturday 12 July 2025 13:58:25 +0000 (0:00:03.828) 0:00:19.980 ********* 2025-07-12 14:00:08.073503 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-12 14:00:08.073513 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-07-12 14:00:08.073522 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-07-12 14:00:08.073532 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-07-12 14:00:08.073541 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-07-12 14:00:08.073551 | orchestrator | 2025-07-12 14:00:08.073561 | orchestrator 
| TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-07-12 14:00:08.073570 | orchestrator | Saturday 12 July 2025 13:58:39 +0000 (0:00:13.788) 0:00:33.769 ********* 2025-07-12 14:00:08.073580 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-07-12 14:00:08.073589 | orchestrator | 2025-07-12 14:00:08.073599 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-07-12 14:00:08.073608 | orchestrator | Saturday 12 July 2025 13:58:43 +0000 (0:00:04.150) 0:00:37.919 ********* 2025-07-12 14:00:08.073621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:08.073657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:08.073670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:08.073689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:08.073700 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:08.073711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:08.073731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:08.073747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:08.073758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:08.073774 | orchestrator | 2025-07-12 14:00:08.073784 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-07-12 14:00:08.073794 | orchestrator | Saturday 12 July 2025 13:58:46 +0000 (0:00:02.887) 0:00:40.807 ********* 2025-07-12 14:00:08.073804 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-07-12 14:00:08.073814 | 
orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-07-12 14:00:08.073823 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-07-12 14:00:08.073833 | orchestrator | 2025-07-12 14:00:08.073842 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-07-12 14:00:08.073852 | orchestrator | Saturday 12 July 2025 13:58:47 +0000 (0:00:01.585) 0:00:42.392 ********* 2025-07-12 14:00:08.073861 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:00:08.073871 | orchestrator | 2025-07-12 14:00:08.073881 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-07-12 14:00:08.073890 | orchestrator | Saturday 12 July 2025 13:58:48 +0000 (0:00:00.217) 0:00:42.610 ********* 2025-07-12 14:00:08.073902 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:00:08.073913 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:00:08.073924 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:00:08.073936 | orchestrator | 2025-07-12 14:00:08.073947 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-07-12 14:00:08.073958 | orchestrator | Saturday 12 July 2025 13:58:48 +0000 (0:00:00.911) 0:00:43.521 ********* 2025-07-12 14:00:08.073970 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 14:00:08.073981 | orchestrator | 2025-07-12 14:00:08.073992 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-07-12 14:00:08.074004 | orchestrator | Saturday 12 July 2025 13:58:49 +0000 (0:00:00.522) 0:00:44.044 ********* 2025-07-12 14:00:08.074058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:08.074083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:08.074107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:08.074119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:08.074133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:08.074145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:08.074163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:08.074174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:08.074194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:08.074205 | orchestrator | 2025-07-12 14:00:08.074215 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-07-12 14:00:08.074225 | orchestrator | Saturday 12 July 2025 13:58:52 +0000 (0:00:03.215) 0:00:47.260 ********* 2025-07-12 14:00:08.074235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 14:00:08.074269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 14:00:08.074281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 14:00:08.074291 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:00:08.074310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 14:00:08.074332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 14:00:08.074343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 14:00:08.074353 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:00:08.074363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 14:00:08.074373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 14:00:08.074383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 14:00:08.074393 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:00:08.074408 | orchestrator | 2025-07-12 14:00:08.074424 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-07-12 14:00:08.074434 | orchestrator | Saturday 12 July 2025 13:58:54 +0000 (0:00:01.461) 0:00:48.721 ********* 2025-07-12 14:00:08.074454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 14:00:08.074470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 14:00:08.074481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 14:00:08.074491 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:00:08.074501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 14:00:08.074511 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 14:00:08.074533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 14:00:08.074544 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:00:08.074558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': 
'30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 14:00:08.074568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 14:00:08.074578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 14:00:08.074588 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:00:08.074598 | orchestrator | 2025-07-12 14:00:08.074608 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-07-12 14:00:08.074618 | orchestrator | Saturday 12 July 2025 13:58:55 +0000 
(0:00:00.822) 0:00:49.544 ********* 2025-07-12 14:00:08.074628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:08.074658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 
2025-07-12 14:00:08.074669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:08.074679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:08.074689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 
'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:08.074699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:08.074720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:08.074735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:08.074746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:08.074756 | orchestrator | 2025-07-12 14:00:08.074766 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-07-12 14:00:08.074775 | orchestrator | Saturday 12 July 2025 13:58:58 +0000 (0:00:03.275) 0:00:52.819 ********* 2025-07-12 14:00:08.074785 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:00:08.074794 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:00:08.074804 | orchestrator | changed: [testbed-node-2] 2025-07-12 14:00:08.074813 | orchestrator | 2025-07-12 14:00:08.074823 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-07-12 14:00:08.074832 | orchestrator | Saturday 12 July 2025 13:59:00 +0000 (0:00:02.511) 0:00:55.331 ********* 2025-07-12 14:00:08.074842 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-12 14:00:08.074852 | orchestrator | 2025-07-12 14:00:08.074861 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] 
************************** 2025-07-12 14:00:08.074871 | orchestrator | Saturday 12 July 2025 13:59:01 +0000 (0:00:00.852) 0:00:56.184 ********* 2025-07-12 14:00:08.074881 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:00:08.074890 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:00:08.074900 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:00:08.074909 | orchestrator | 2025-07-12 14:00:08.074918 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-07-12 14:00:08.074928 | orchestrator | Saturday 12 July 2025 13:59:02 +0000 (0:00:00.885) 0:00:57.069 ********* 2025-07-12 14:00:08.074937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:08.074958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:08.074973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:08.074984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:08.074994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:08.075004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:08.075020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:08.075036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:08.075050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:08.075061 | orchestrator | 2025-07-12 14:00:08.075071 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-07-12 14:00:08.075080 | orchestrator | Saturday 12 July 2025 13:59:11 +0000 (0:00:08.777) 0:01:05.847 ********* 2025-07-12 14:00:08.075091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 14:00:08.075101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 14:00:08.075116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 14:00:08.075126 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:00:08.075142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 14:00:08.075153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  
2025-07-12 14:00:08.075167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 14:00:08.075177 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:00:08.075188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 14:00:08.075203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 14:00:08.075213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 14:00:08.075223 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:00:08.075233 | orchestrator | 2025-07-12 14:00:08.075259 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-07-12 14:00:08.075278 | orchestrator | Saturday 12 July 2025 13:59:13 +0000 (0:00:01.866) 0:01:07.714 ********* 2025-07-12 14:00:08.075296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:08.075311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:08.075322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:08.075342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:08.075353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:08.075369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:08.075384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:08.075395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:08.075404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-12 14:00:08.075420 | orchestrator |
2025-07-12 14:00:08.075430 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-07-12 14:00:08.075440 | orchestrator | Saturday 12 July 2025 13:59:16 +0000 (0:00:02.975) 0:01:10.689 *********
2025-07-12 14:00:08.075449 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:00:08.075459 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:00:08.075468 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:00:08.075478 | orchestrator |
2025-07-12 14:00:08.075488 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2025-07-12 14:00:08.075497 | orchestrator | Saturday 12 July 2025 13:59:17 +0000 (0:00:00.850) 0:01:11.540 *********
2025-07-12 14:00:08.075507 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:00:08.075516 | orchestrator |
2025-07-12 14:00:08.075526 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2025-07-12 14:00:08.075535 | orchestrator | Saturday 12 July 2025 13:59:19 +0000 (0:00:02.304) 0:01:13.845 *********
2025-07-12 14:00:08.075545 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:00:08.075554 | orchestrator |
2025-07-12 14:00:08.075564 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2025-07-12 14:00:08.075574 | orchestrator | Saturday 12 July 2025 13:59:21 +0000 (0:00:02.320) 0:01:16.165 *********
2025-07-12 14:00:08.075583 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:00:08.075593 | orchestrator |
2025-07-12 14:00:08.075602 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-07-12 14:00:08.075612 | orchestrator | Saturday 12 July 2025 13:59:32 +0000 (0:00:11.209) 0:01:27.375 *********
2025-07-12 14:00:08.075621 | orchestrator |
2025-07-12 14:00:08.075630 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-07-12 14:00:08.075640 | orchestrator | Saturday 12 July 2025 13:59:33 +0000 (0:00:00.164) 0:01:27.539 *********
2025-07-12 14:00:08.075649 | orchestrator |
2025-07-12 14:00:08.075659 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-07-12 14:00:08.075668 | orchestrator | Saturday 12 July 2025 13:59:33 +0000 (0:00:00.126) 0:01:27.666 *********
2025-07-12 14:00:08.075678 | orchestrator |
2025-07-12 14:00:08.075687 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2025-07-12 14:00:08.075696 | orchestrator | Saturday 12 July 2025 13:59:33 +0000 (0:00:00.072) 0:01:27.739 *********
2025-07-12 14:00:08.075706 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:00:08.075715 | orchestrator | changed: [testbed-node-2]
2025-07-12 14:00:08.075725 | orchestrator | changed: [testbed-node-1]
2025-07-12 14:00:08.075734 | orchestrator |
2025-07-12 14:00:08.075743 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2025-07-12 14:00:08.075753 | orchestrator | Saturday 12 July 2025 13:59:43 +0000 (0:00:10.572) 0:01:38.311 *********
2025-07-12 14:00:08.075762 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:00:08.075772 | orchestrator | changed: [testbed-node-1]
2025-07-12 14:00:08.075786 | orchestrator | changed: [testbed-node-2]
2025-07-12 14:00:08.075796 | orchestrator |
2025-07-12 14:00:08.075806 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2025-07-12 14:00:08.075816 | orchestrator | Saturday 12 July 2025 13:59:54 +0000 (0:00:10.978) 0:01:49.290 *********
2025-07-12 14:00:08.075825 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:00:08.075834 | orchestrator | changed: [testbed-node-1]
2025-07-12 14:00:08.075844 | orchestrator | changed: [testbed-node-2]
2025-07-12 14:00:08.075853 | orchestrator |
2025-07-12 14:00:08.075872 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 14:00:08.075882 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-07-12 14:00:08.075892 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-12 14:00:08.075906 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-12 14:00:08.075916 | orchestrator |
2025-07-12 14:00:08.075926 | orchestrator |
2025-07-12 14:00:08.075935 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 14:00:08.075945 | orchestrator | Saturday 12 July 2025 14:00:05 +0000 (0:00:10.955) 0:02:00.246 *********
2025-07-12 14:00:08.075954 | orchestrator | ===============================================================================
2025-07-12 14:00:08.075964 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 13.79s
2025-07-12 14:00:08.075973 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.21s
2025-07-12 14:00:08.075983 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 10.98s
2025-07-12 14:00:08.075992 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.96s
2025-07-12 14:00:08.076001 | orchestrator | barbican : Restart barbican-api container ------------------------------ 10.57s
2025-07-12 14:00:08.076011 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 8.78s
2025-07-12 14:00:08.076020 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.97s
2025-07-12 14:00:08.076029 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.15s
2025-07-12 14:00:08.076039 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.83s
2025-07-12 14:00:08.076048 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.81s
2025-07-12 14:00:08.076057 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.38s
2025-07-12 14:00:08.076067 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.28s
2025-07-12 14:00:08.076076 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.22s
2025-07-12 14:00:08.076085 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.98s
2025-07-12 14:00:08.076095 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.89s
2025-07-12 14:00:08.076104 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.51s
2025-07-12 14:00:08.076114 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.32s
2025-07-12 14:00:08.076124 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.30s
2025-07-12 14:00:08.076133 | orchestrator | barbican : Copying over existing policy file ---------------------------- 1.87s
2025-07-12 14:00:08.076142 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.59s
2025-07-12 14:00:08.076152 | orchestrator | 2025-07-12 14:00:08 | INFO  | Task e34d2724-8ec9-4ff5-b21a-4bb12765a687 is in state
STARTED
2025-07-12 14:00:08.076162 | orchestrator | 2025-07-12 14:00:08 | INFO  | Task 2f1dd541-1e6c-437f-a30c-9f785d73a8b1 is in state STARTED
2025-07-12 14:00:08.076172 | orchestrator | 2025-07-12 14:00:08 | INFO  | Wait 1 second(s) until the next check
[... identical polls repeated every ~3 s; tasks f382daf1-0e6d-4c48-8a24-471257a73cee, e34d2724-8ec9-4ff5-b21a-4bb12765a687 and 2f1dd541-1e6c-437f-a30c-9f785d73a8b1 remain in state STARTED ...]
2025-07-12 14:00:17.208057 | orchestrator | 2025-07-12 14:00:17 | INFO  | Task f02cc805-5e62-401c-851e-7b521928667b is in state SUCCESS
2025-07-12 14:00:17.209022 | orchestrator | 2025-07-12 14:00:17 | INFO  | Task a25ae6e3-db47-496a-b358-b1c12cf33699 is in state STARTED
[... identical polls repeated every ~3 s until 14:01:21; tasks f382daf1-0e6d-4c48-8a24-471257a73cee, e34d2724-8ec9-4ff5-b21a-4bb12765a687, a25ae6e3-db47-496a-b358-b1c12cf33699 and 2f1dd541-1e6c-437f-a30c-9f785d73a8b1 remain in state STARTED ...]
2025-07-12 14:01:24.126883 | orchestrator | 2025-07-12 14:01:24 | INFO  | Task f382daf1-0e6d-4c48-8a24-471257a73cee is in state SUCCESS
2025-07-12 14:01:24.128264 | orchestrator |
2025-07-12 14:01:24.128403 | orchestrator |
2025-07-12 14:01:24.128472 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 14:01:24.128488 | orchestrator |
2025-07-12 14:01:24.128499 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 14:01:24.128537 | orchestrator | Saturday 12 July 2025 14:00:13 +0000 (0:00:00.570) 0:00:00.570 *********
2025-07-12 14:01:24.128549 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:01:24.128589 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:01:24.128600 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:01:24.128696 | orchestrator |
2025-07-12 14:01:24.128710 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 14:01:24.128721 | orchestrator | Saturday 12 July 2025 14:00:13 +0000 (0:00:00.461) 0:00:01.032 *********
2025-07-12 14:01:24.128779 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-07-12 14:01:24.128793 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-07-12 14:01:24.128806 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-07-12 14:01:24.128819 | orchestrator |
2025-07-12 14:01:24.128831 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-07-12 14:01:24.128843 | orchestrator |
2025-07-12 14:01:24.128855 | orchestrator | TASK [Waiting for Keystone public
port to be UP] *******************************
2025-07-12 14:01:24.128868 | orchestrator | Saturday 12 July 2025 14:00:14 +0000 (0:00:00.882) 0:00:01.914 *********
2025-07-12 14:01:24.128880 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:01:24.128892 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:01:24.128904 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:01:24.128917 | orchestrator |
2025-07-12 14:01:24.128929 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 14:01:24.128942 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 14:01:24.128956 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 14:01:24.128969 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 14:01:24.128981 | orchestrator |
2025-07-12 14:01:24.128993 | orchestrator |
2025-07-12 14:01:24.129005 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 14:01:24.129017 | orchestrator | Saturday 12 July 2025 14:00:15 +0000 (0:00:00.910) 0:00:02.824 *********
2025-07-12 14:01:24.129030 | orchestrator | ===============================================================================
2025-07-12 14:01:24.129042 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.91s
2025-07-12 14:01:24.129054 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.88s
2025-07-12 14:01:24.129068 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.46s
2025-07-12 14:01:24.129080 | orchestrator |
2025-07-12 14:01:24.129092 | orchestrator |
2025-07-12 14:01:24.129135 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 14:01:24.129148 | orchestrator |
2025-07-12 14:01:24.129161 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 14:01:24.129250 | orchestrator | Saturday 12 July 2025 13:58:06 +0000 (0:00:00.532) 0:00:00.532 *********
2025-07-12 14:01:24.129264 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:01:24.129275 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:01:24.129314 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:01:24.129326 | orchestrator |
2025-07-12 14:01:24.129337 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 14:01:24.129348 | orchestrator | Saturday 12 July 2025 13:58:06 +0000 (0:00:00.429) 0:00:00.961 *********
2025-07-12 14:01:24.129359 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2025-07-12 14:01:24.129370 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2025-07-12 14:01:24.129380 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2025-07-12 14:01:24.129391 | orchestrator |
2025-07-12 14:01:24.129402 | orchestrator | PLAY [Apply role designate] ****************************************************
2025-07-12 14:01:24.129412 | orchestrator |
2025-07-12 14:01:24.129423 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-07-12 14:01:24.129434 | orchestrator | Saturday 12 July 2025 13:58:07 +0000 (0:00:00.440) 0:00:01.402 *********
2025-07-12 14:01:24.129445 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 14:01:24.129456 | orchestrator |
2025-07-12 14:01:24.129466 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2025-07-12 14:01:24.129477 | orchestrator | Saturday 12 July 2025 13:58:07 +0000 (0:00:00.525) 0:00:01.928 *********
2025-07-12 14:01:24.129488 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2025-07-12 14:01:24.129498 | orchestrator |
2025-07-12 14:01:24.129509 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2025-07-12 14:01:24.129520 | orchestrator | Saturday 12 July 2025 13:58:11 +0000 (0:00:03.678) 0:00:05.607 *********
2025-07-12 14:01:24.129530 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2025-07-12 14:01:24.129541 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2025-07-12 14:01:24.129552 | orchestrator |
2025-07-12 14:01:24.129563 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2025-07-12 14:01:24.129574 | orchestrator | Saturday 12 July 2025 13:58:18 +0000 (0:00:07.093) 0:00:12.700 *********
2025-07-12 14:01:24.129585 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating projects (5 retries left).
2025-07-12 14:01:24.129595 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-12 14:01:24.129606 | orchestrator |
2025-07-12 14:01:24.129617 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2025-07-12 14:01:24.129648 | orchestrator | Saturday 12 July 2025 13:58:34 +0000 (0:00:15.781) 0:00:28.482 *********
2025-07-12 14:01:24.129660 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-12 14:01:24.129671 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2025-07-12 14:01:24.129682 | orchestrator |
2025-07-12 14:01:24.129693 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2025-07-12 14:01:24.129703 | orchestrator | Saturday 12 July 2025 13:58:37 +0000 (0:00:03.327) 0:00:31.809 *********
2025-07-12 14:01:24.129714 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-12 14:01:24.129725 | orchestrator |
2025-07-12 14:01:24.129736 |
orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-07-12 14:01:24.129746 | orchestrator | Saturday 12 July 2025 13:58:41 +0000 (0:00:03.505) 0:00:35.314 ********* 2025-07-12 14:01:24.129757 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-07-12 14:01:24.129767 | orchestrator | 2025-07-12 14:01:24.129778 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-07-12 14:01:24.129789 | orchestrator | Saturday 12 July 2025 13:58:45 +0000 (0:00:04.355) 0:00:39.669 ********* 2025-07-12 14:01:24.129811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 14:01:24.129827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 14:01:24.129854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 14:01:24.129889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 14:01:24.129904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 14:01:24.129922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 14:01:24.129934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.129946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.129957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.129969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.129994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.130006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.130084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.130099 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.130110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.130122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.130133 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.130159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.130178 | orchestrator | 2025-07-12 14:01:24.130190 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-07-12 14:01:24.130201 | orchestrator | Saturday 12 July 2025 13:58:48 +0000 (0:00:03.366) 0:00:43.035 ********* 2025-07-12 14:01:24.130212 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:01:24.130223 | orchestrator | 2025-07-12 14:01:24.130233 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-07-12 14:01:24.130244 | orchestrator | Saturday 12 July 2025 13:58:48 +0000 (0:00:00.134) 0:00:43.169 ********* 2025-07-12 14:01:24.130254 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:01:24.130265 | orchestrator | skipping: [testbed-node-1] 2025-07-12 
14:01:24.130276 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:01:24.130303 | orchestrator | 2025-07-12 14:01:24.130314 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-07-12 14:01:24.130325 | orchestrator | Saturday 12 July 2025 13:58:49 +0000 (0:00:00.259) 0:00:43.429 ********* 2025-07-12 14:01:24.130336 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 14:01:24.130347 | orchestrator | 2025-07-12 14:01:24.130358 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-07-12 14:01:24.130368 | orchestrator | Saturday 12 July 2025 13:58:49 +0000 (0:00:00.572) 0:00:44.001 ********* 2025-07-12 14:01:24.130380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 14:01:24.130392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 14:01:24.130404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 14:01:24.130442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 14:01:24.130454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 14:01:24.130465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 14:01:24.130477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.130488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.130500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.130529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.130542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.130553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.130564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.130575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.130586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.130597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.130627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.130640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.130651 | orchestrator | 2025-07-12 14:01:24.130663 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-07-12 14:01:24.130674 | orchestrator | Saturday 12 July 2025 13:58:55 +0000 (0:00:05.950) 0:00:49.952 ********* 2025-07-12 14:01:24.130685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 
'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 14:01:24.130697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 14:01:24.130708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  
2025-07-12 14:01:24.130725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.131366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.131477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.131507 | orchestrator | skipping: [testbed-node-0] 
2025-07-12 14:01:24.131530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 14:01:24.131551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 14:01:24.131572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.131618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.131659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.131672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.131683 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:01:24.131695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 14:01:24.131707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 14:01:24.131718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.131736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.131764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.131778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 
'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.131791 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:01:24.131804 | orchestrator | 2025-07-12 14:01:24.131818 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-07-12 14:01:24.131833 | orchestrator | Saturday 12 July 2025 13:58:57 +0000 (0:00:01.904) 0:00:51.856 ********* 2025-07-12 14:01:24.131847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 14:01:24.131860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 14:01:24.131879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.131892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.131919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.131932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.131943 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:01:24.131955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 14:01:24.131967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 14:01:24.131984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.131996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.132019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.132031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.132042 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:01:24.132053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 14:01:24.132065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 14:01:24.132082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.132094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.132116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.132129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.132140 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:01:24.132151 | orchestrator | 2025-07-12 14:01:24.132162 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-07-12 14:01:24.132172 | orchestrator | Saturday 12 July 2025 13:58:59 +0000 (0:00:02.124) 0:00:53.980 ********* 
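The service items logged above each carry a `healthcheck` dict in kolla-ansible's style (bare-second duration strings plus a `['CMD-SHELL', …]` test list). As a rough illustration only — the helper name and the exact flag mapping are assumptions, not kolla-ansible code — such a dict can be translated into `docker run` health flags like this:

```python
# Hypothetical sketch: map a kolla-ansible style healthcheck dict
# (as seen in the task items above) onto `docker run` health options.
# Assumes durations are bare seconds and test[0] is the 'CMD-SHELL' marker.
def healthcheck_to_docker_args(hc):
    return [
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
        # drop the leading 'CMD-SHELL' marker, keep the shell command itself
        "--health-cmd=" + " ".join(hc["test"][1:]),
    ]

# Example dict taken from the designate-api item in the log above.
hc = {
    "interval": "30",
    "retries": "3",
    "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9001"],
    "timeout": "30",
}
print(healthcheck_to_docker_args(hc))
```

This is only a reading aid for the dicts in the log; the real container options are rendered by the kolla-ansible role templates, not by a helper like this.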
2025-07-12 14:01:24.132190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 14:01:24.132219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 14:01:24.132239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 14:01:24.132276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 14:01:24.132329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 14:01:24.132348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 14:01:24.132379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.132398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.132418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.132464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.132485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.132505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.132523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.132555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 
2025-07-12 14:01:24.132576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.132597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.132624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.132636 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.132647 | orchestrator | 2025-07-12 14:01:24.132659 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-07-12 14:01:24.132670 | orchestrator | Saturday 12 July 2025 13:59:05 +0000 (0:00:06.075) 0:01:00.056 ********* 2025-07-12 14:01:24.132681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 14:01:24.132700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 14:01:24.132712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 14:01:24.132734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 14:01:24.132747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 14:01:24.132758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 14:01:24.132776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.132787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.132799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.132810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.132834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.132847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.132864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.132875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.132887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.132898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.132910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.132932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.132946 | orchestrator | 2025-07-12 14:01:24.132965 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-07-12 14:01:24.132993 | orchestrator | Saturday 12 July 2025 13:59:28 +0000 (0:00:22.441) 0:01:22.498 ********* 2025-07-12 14:01:24.133012 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-07-12 14:01:24.133034 | orchestrator | changed: 
[testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-07-12 14:01:24.133052 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-07-12 14:01:24.133068 | orchestrator | 2025-07-12 14:01:24.133079 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-07-12 14:01:24.133090 | orchestrator | Saturday 12 July 2025 13:59:34 +0000 (0:00:06.175) 0:01:28.674 ********* 2025-07-12 14:01:24.133101 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-07-12 14:01:24.133112 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-07-12 14:01:24.133122 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-07-12 14:01:24.133133 | orchestrator | 2025-07-12 14:01:24.133144 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-07-12 14:01:24.133155 | orchestrator | Saturday 12 July 2025 13:59:37 +0000 (0:00:03.352) 0:01:32.026 ********* 2025-07-12 14:01:24.133167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 14:01:24.133179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 14:01:24.133203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 14:01:24.133223 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 14:01:24.133235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.133246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.133258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.133270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 14:01:24.133282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.133355 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.133375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.133386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 14:01:24.133397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.133409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.133420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.133442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 
'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.133465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.133476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.133487 | orchestrator | 2025-07-12 14:01:24.133499 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-07-12 14:01:24.133510 | orchestrator | Saturday 12 July 2025 
13:59:41 +0000 (0:00:03.358) 0:01:35.385 ********* 2025-07-12 14:01:24.133521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 14:01:24.133533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 14:01:24.133556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 14:01:24.133574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 14:01:24.133586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.133598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.133609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 14:01:24.133620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.133632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.133738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.133754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.133765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 14:01:24.133777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.133788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.133800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.133819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.133844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 
2025-07-12 14:01:24.133856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.133867 | orchestrator | 2025-07-12 14:01:24.133878 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-07-12 14:01:24.133889 | orchestrator | Saturday 12 July 2025 13:59:43 +0000 (0:00:02.613) 0:01:37.998 ********* 2025-07-12 14:01:24.133901 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:01:24.133911 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:01:24.133922 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:01:24.133933 | orchestrator | 2025-07-12 14:01:24.133944 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-07-12 14:01:24.133955 | orchestrator | Saturday 12 July 2025 13:59:44 +0000 (0:00:00.850) 0:01:38.849 ********* 2025-07-12 14:01:24.133966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 14:01:24.133978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 14:01:24.133996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.134067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 14:01:24.134085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.134097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 14:01:24.134109 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.134120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.134140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.134151 | orchestrator | skipping: [testbed-node-1] 2025-07-12 
14:01:24.134175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.134187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.134199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.134210 | orchestrator | skipping: [testbed-node-0] 2025-07-12 
14:01:24.134221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 14:01:24.134232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 14:01:24.134249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.134272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.134305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.134317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 14:01:24.134328 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:01:24.134339 | orchestrator | 2025-07-12 14:01:24.134350 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-07-12 14:01:24.134361 | orchestrator | Saturday 12 July 2025 13:59:46 +0000 (0:00:01.804) 0:01:40.653 ********* 2025-07-12 14:01:24.134372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 14:01:24.134390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 14:01:24.134413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 14:01:24.134425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2025-07-12 14:01:24.134436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 14:01:24.134448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 14:01:24.134459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.134482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.134493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.134515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 
2025-07-12 14:01:24.134527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.134538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.134550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.134571 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.134583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.134594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:24.134613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 14:01:24.134625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 14:01:24.134636 | orchestrator |
2025-07-12 14:01:24.134648 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-07-12 14:01:24.134659 | orchestrator | Saturday 12 July 2025 13:59:50 +0000 (0:00:04.548) 0:01:45.202 *********
2025-07-12 14:01:24.134670 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:01:24.134681 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:01:24.134691 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:01:24.134702 | orchestrator |
2025-07-12 14:01:24.134713 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2025-07-12 14:01:24.134741 | orchestrator | Saturday 12 July 2025 13:59:51 +0000 (0:00:00.244) 0:01:45.446 *********
2025-07-12 14:01:24.134761 | orchestrator | changed: [testbed-node-0] => (item=designate)
2025-07-12 14:01:24.134779 | orchestrator |
2025-07-12 14:01:24.134797 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2025-07-12 14:01:24.134815 | orchestrator | Saturday 12 July 2025 13:59:53 +0000 (0:00:02.591) 0:01:48.038 *********
2025-07-12 14:01:24.134835 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-12 14:01:24.134854 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2025-07-12 14:01:24.134873 | orchestrator |
2025-07-12 14:01:24.134889 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2025-07-12 14:01:24.134933 | orchestrator | Saturday 12 July 2025 13:59:55 +0000 (0:00:02.123) 0:01:50.162 *********
2025-07-12 14:01:24.134946 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:01:24.134957 | orchestrator |
2025-07-12 14:01:24.134968 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-07-12 14:01:24.134979 | orchestrator | Saturday 12 July 2025 14:00:10 +0000 (0:00:14.385) 0:02:04.548 *********
2025-07-12 14:01:24.134989 | orchestrator |
2025-07-12 14:01:24.135000 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-07-12 14:01:24.135010 | orchestrator | Saturday 12 July 2025 14:00:10 +0000 (0:00:00.102) 0:02:04.650 *********
2025-07-12 14:01:24.135021 | orchestrator |
2025-07-12 14:01:24.135032 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-07-12 14:01:24.135043 | orchestrator | Saturday 12 July 2025 14:00:10 +0000 (0:00:00.066) 0:02:04.716 *********
2025-07-12 14:01:24.135054 | orchestrator |
2025-07-12 14:01:24.135065 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-07-12 14:01:24.135076 | orchestrator | Saturday 12 July 2025 14:00:10 +0000 (0:00:00.066) 0:02:04.783 *********
2025-07-12 14:01:24.135087 | orchestrator | changed: [testbed-node-2]
2025-07-12 14:01:24.135098 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:01:24.135117 | orchestrator | changed: [testbed-node-1]
2025-07-12 14:01:24.135135 | orchestrator |
2025-07-12 14:01:24.135154 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-07-12 14:01:24.135171 | orchestrator | Saturday 12 July 2025 14:00:25 +0000 (0:00:14.759) 0:02:19.543 *********
2025-07-12 14:01:24.135188 | orchestrator | changed: [testbed-node-2]
2025-07-12 14:01:24.135205 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:01:24.135224 | orchestrator | changed: [testbed-node-1]
2025-07-12 14:01:24.135243 | orchestrator |
2025-07-12 14:01:24.135261 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-07-12 14:01:24.135281 | orchestrator | Saturday 12 July 2025 14:00:36 +0000 (0:00:11.512) 0:02:31.056 *********
2025-07-12 14:01:24.135377 | orchestrator | changed: [testbed-node-1]
2025-07-12 14:01:24.135397 | orchestrator | changed: [testbed-node-2]
2025-07-12 14:01:24.135415 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:01:24.135425 | orchestrator |
2025-07-12 14:01:24.135435 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-07-12 14:01:24.135444 | orchestrator | Saturday 12 July 2025 14:00:46 +0000 (0:00:09.791) 0:02:40.847 *********
2025-07-12 14:01:24.135454 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:01:24.135464 | orchestrator | changed: [testbed-node-2]
2025-07-12 14:01:24.135473 | orchestrator | changed: [testbed-node-1]
2025-07-12 14:01:24.135483 | orchestrator |
2025-07-12 14:01:24.135492 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-07-12 14:01:24.135502 | orchestrator | Saturday 12 July 2025 14:00:52 +0000 (0:00:05.860) 0:02:46.708 *********
2025-07-12 14:01:24.135512 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:01:24.135522 | orchestrator | changed: [testbed-node-1]
2025-07-12 14:01:24.135531 | orchestrator | changed: [testbed-node-2]
2025-07-12 14:01:24.135541 | orchestrator |
2025-07-12 14:01:24.135567 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-07-12 14:01:24.135589 | orchestrator | Saturday 12 July 2025 14:01:02 +0000 (0:00:10.126) 0:02:56.834 *********
2025-07-12 14:01:24.135598 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:01:24.135608 | orchestrator | changed: [testbed-node-1]
2025-07-12 14:01:24.135617 | orchestrator | changed: [testbed-node-2]
2025-07-12 14:01:24.135627 | orchestrator |
2025-07-12 14:01:24.135636 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-07-12 14:01:24.135646 | orchestrator | Saturday 12 July 2025 14:01:16 +0000 (0:00:13.511) 0:03:10.346 *********
2025-07-12 14:01:24.135655 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:01:24.135665 | orchestrator |
2025-07-12 14:01:24.135674 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 14:01:24.135684 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-07-12 14:01:24.135695 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-12 14:01:24.135705 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-12 14:01:24.135715 | orchestrator |
2025-07-12 14:01:24.135725 | orchestrator |
2025-07-12 14:01:24.135735 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 14:01:24.135744 | orchestrator | Saturday 12 July 2025 14:01:23 +0000 (0:00:07.182) 0:03:17.529 *********
2025-07-12 14:01:24.135754 | orchestrator | ===============================================================================
2025-07-12 14:01:24.135763 | orchestrator | designate : Copying over designate.conf -------------------------------- 22.44s
2025-07-12 14:01:24.135773 | orchestrator | service-ks-register : designate | Creating projects -------------------- 15.78s
2025-07-12 14:01:24.135782 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 14.76s
2025-07-12 14:01:24.135792 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.39s
2025-07-12 14:01:24.135801 | orchestrator | designate : Restart designate-worker container ------------------------- 13.51s
2025-07-12 14:01:24.135811 | orchestrator | designate : Restart designate-api container ---------------------------- 11.51s
2025-07-12 14:01:24.135820 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.13s
2025-07-12 14:01:24.135829 | orchestrator | designate : Restart designate-central container ------------------------- 9.79s
2025-07-12 14:01:24.135839 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.18s
2025-07-12 14:01:24.135848 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.09s
2025-07-12 14:01:24.135858 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.18s
2025-07-12 14:01:24.135867 | orchestrator | designate : Copying over config.json files for services ----------------- 6.07s
2025-07-12 14:01:24.135876 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.95s
2025-07-12 14:01:24.135886 | orchestrator | designate : Restart designate-producer container ------------------------ 5.86s
2025-07-12 14:01:24.135896 | orchestrator | designate : Check designate containers ---------------------------------- 4.55s
2025-07-12 14:01:24.135905 | orchestrator |
service-ks-register : designate | Granting user roles ------------------- 4.36s 2025-07-12 14:01:24.135914 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.68s 2025-07-12 14:01:24.135924 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.51s 2025-07-12 14:01:24.135933 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.37s 2025-07-12 14:01:24.135943 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.36s 2025-07-12 14:01:24.135953 | orchestrator | 2025-07-12 14:01:24 | INFO  | Task e34d2724-8ec9-4ff5-b21a-4bb12765a687 is in state STARTED 2025-07-12 14:01:24.135964 | orchestrator | 2025-07-12 14:01:24 | INFO  | Task a25ae6e3-db47-496a-b358-b1c12cf33699 is in state STARTED 2025-07-12 14:01:24.135989 | orchestrator | 2025-07-12 14:01:24 | INFO  | Task 2f1dd541-1e6c-437f-a30c-9f785d73a8b1 is in state STARTED 2025-07-12 14:01:24.136006 | orchestrator | 2025-07-12 14:01:24 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:01:27.161504 | orchestrator | 2025-07-12 14:01:27 | INFO  | Task e34d2724-8ec9-4ff5-b21a-4bb12765a687 is in state STARTED 2025-07-12 14:01:27.161979 | orchestrator | 2025-07-12 14:01:27 | INFO  | Task a25ae6e3-db47-496a-b358-b1c12cf33699 is in state STARTED 2025-07-12 14:01:27.164109 | orchestrator | 2025-07-12 14:01:27 | INFO  | Task 9535279f-350d-4a02-83ee-add14ffebcf3 is in state STARTED 2025-07-12 14:01:27.164862 | orchestrator | 2025-07-12 14:01:27 | INFO  | Task 2f1dd541-1e6c-437f-a30c-9f785d73a8b1 is in state STARTED 2025-07-12 14:01:27.164884 | orchestrator | 2025-07-12 14:01:27 | INFO  | Wait 1 second(s) until the next check 
2025-07-12 14:02:03.713603 | orchestrator | 2025-07-12 14:02:03 | INFO  | Task e34d2724-8ec9-4ff5-b21a-4bb12765a687 is in state STARTED 2025-07-12 14:02:03.715511 | orchestrator | 2025-07-12 14:02:03 | INFO  | Task a25ae6e3-db47-496a-b358-b1c12cf33699 is in state STARTED 2025-07-12 14:02:03.716730 | orchestrator | 2025-07-12 14:02:03 | INFO  | Task 9535279f-350d-4a02-83ee-add14ffebcf3 is in state SUCCESS 2025-07-12 14:02:03.719646 | orchestrator | 2025-07-12 14:02:03 | INFO  | Task 2f1dd541-1e6c-437f-a30c-9f785d73a8b1 is in state STARTED 2025-07-12 14:02:03.720516 | orchestrator | 2025-07-12 14:02:03 | INFO  | Task 29082a66-c2ea-4bd3-b778-7dd233ba03ae is in state STARTED 2025-07-12 14:02:03.720785 | orchestrator | 2025-07-12 14:02:03 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:02:06.763222 | orchestrator | 2025-07-12 14:02:06 | INFO  | Task 
e34d2724-8ec9-4ff5-b21a-4bb12765a687 is in state STARTED 2025-07-12 14:02:06.763848 | orchestrator | 2025-07-12 14:02:06 | INFO  | Task a25ae6e3-db47-496a-b358-b1c12cf33699 is in state STARTED 2025-07-12 14:02:06.765237 | orchestrator | 2025-07-12 14:02:06 | INFO  | Task 2f1dd541-1e6c-437f-a30c-9f785d73a8b1 is in state SUCCESS 2025-07-12 14:02:06.767016 | orchestrator | 2025-07-12 14:02:06.767054 | orchestrator | 2025-07-12 14:02:06.767067 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 14:02:06.767079 | orchestrator | 2025-07-12 14:02:06.767091 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 14:02:06.767103 | orchestrator | Saturday 12 July 2025 14:01:28 +0000 (0:00:00.268) 0:00:00.269 ********* 2025-07-12 14:02:06.767145 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:02:06.767158 | orchestrator | ok: [testbed-node-1] 2025-07-12 14:02:06.767226 | orchestrator | ok: [testbed-node-2] 2025-07-12 14:02:06.767241 | orchestrator | ok: [testbed-manager] 2025-07-12 14:02:06.767252 | orchestrator | ok: [testbed-node-3] 2025-07-12 14:02:06.767262 | orchestrator | ok: [testbed-node-4] 2025-07-12 14:02:06.767273 | orchestrator | ok: [testbed-node-5] 2025-07-12 14:02:06.767283 | orchestrator | 2025-07-12 14:02:06.767294 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 14:02:06.767327 | orchestrator | Saturday 12 July 2025 14:01:29 +0000 (0:00:00.931) 0:00:01.200 ********* 2025-07-12 14:02:06.767613 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-07-12 14:02:06.767633 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-07-12 14:02:06.767644 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-07-12 14:02:06.767655 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-07-12 14:02:06.767666 | orchestrator | 
ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-07-12 14:02:06.767676 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-07-12 14:02:06.767703 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-07-12 14:02:06.767715 | orchestrator | 2025-07-12 14:02:06.767725 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-07-12 14:02:06.767736 | orchestrator | 2025-07-12 14:02:06.767747 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-07-12 14:02:06.767781 | orchestrator | Saturday 12 July 2025 14:01:30 +0000 (0:00:00.853) 0:00:02.054 ********* 2025-07-12 14:02:06.767794 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 14:02:06.767806 | orchestrator | 2025-07-12 14:02:06.767817 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-07-12 14:02:06.767827 | orchestrator | Saturday 12 July 2025 14:01:31 +0000 (0:00:01.306) 0:00:03.360 ********* 2025-07-12 14:02:06.767838 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-07-12 14:02:06.767849 | orchestrator | 2025-07-12 14:02:06.767859 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-07-12 14:02:06.767870 | orchestrator | Saturday 12 July 2025 14:01:35 +0000 (0:00:03.285) 0:00:06.646 ********* 2025-07-12 14:02:06.767883 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-07-12 14:02:06.767895 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-07-12 14:02:06.767906 | orchestrator | 2025-07-12 14:02:06.767937 | 
orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-07-12 14:02:06.767949 | orchestrator | Saturday 12 July 2025 14:01:41 +0000 (0:00:06.559) 0:00:13.205 ********* 2025-07-12 14:02:06.767959 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-12 14:02:06.767970 | orchestrator | 2025-07-12 14:02:06.767981 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-07-12 14:02:06.767991 | orchestrator | Saturday 12 July 2025 14:01:44 +0000 (0:00:03.324) 0:00:16.530 ********* 2025-07-12 14:02:06.768002 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-12 14:02:06.768012 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-07-12 14:02:06.768023 | orchestrator | 2025-07-12 14:02:06.768033 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-07-12 14:02:06.768044 | orchestrator | Saturday 12 July 2025 14:01:48 +0000 (0:00:03.942) 0:00:20.473 ********* 2025-07-12 14:02:06.768054 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-12 14:02:06.768065 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-07-12 14:02:06.768076 | orchestrator | 2025-07-12 14:02:06.768086 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-07-12 14:02:06.768097 | orchestrator | Saturday 12 July 2025 14:01:55 +0000 (0:00:06.763) 0:00:27.236 ********* 2025-07-12 14:02:06.768108 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-07-12 14:02:06.768118 | orchestrator | 2025-07-12 14:02:06.768128 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 14:02:06.768139 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 14:02:06.768150 | orchestrator | testbed-node-0 : 
ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 14:02:06.768161 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 14:02:06.768172 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 14:02:06.768183 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 14:02:06.768205 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 14:02:06.768216 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 14:02:06.768236 | orchestrator | 2025-07-12 14:02:06.768246 | orchestrator | 2025-07-12 14:02:06.768257 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 14:02:06.768267 | orchestrator | Saturday 12 July 2025 14:02:00 +0000 (0:00:04.886) 0:00:32.122 ********* 2025-07-12 14:02:06.768278 | orchestrator | =============================================================================== 2025-07-12 14:02:06.768289 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.76s 2025-07-12 14:02:06.768299 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.56s 2025-07-12 14:02:06.768334 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.89s 2025-07-12 14:02:06.768345 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.94s 2025-07-12 14:02:06.768355 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.33s 2025-07-12 14:02:06.768366 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.29s 2025-07-12 14:02:06.768376 | orchestrator | ceph-rgw : include_tasks 
------------------------------------------------ 1.31s 2025-07-12 14:02:06.768392 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.93s 2025-07-12 14:02:06.768403 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.85s 2025-07-12 14:02:06.768413 | orchestrator | 2025-07-12 14:02:06.768424 | orchestrator | 2025-07-12 14:02:06.768434 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 14:02:06.768445 | orchestrator | 2025-07-12 14:02:06.768455 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 14:02:06.768466 | orchestrator | Saturday 12 July 2025 14:00:05 +0000 (0:00:00.525) 0:00:00.525 ********* 2025-07-12 14:02:06.768476 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:02:06.768487 | orchestrator | ok: [testbed-node-1] 2025-07-12 14:02:06.768497 | orchestrator | ok: [testbed-node-2] 2025-07-12 14:02:06.768508 | orchestrator | 2025-07-12 14:02:06.768519 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 14:02:06.768529 | orchestrator | Saturday 12 July 2025 14:00:05 +0000 (0:00:00.452) 0:00:00.978 ********* 2025-07-12 14:02:06.768540 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-07-12 14:02:06.768550 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-07-12 14:02:06.768561 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-07-12 14:02:06.768571 | orchestrator | 2025-07-12 14:02:06.768582 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-07-12 14:02:06.768593 | orchestrator | 2025-07-12 14:02:06.768603 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-07-12 14:02:06.768613 | orchestrator | Saturday 12 July 2025 14:00:06 +0000 (0:00:00.388) 
0:00:01.366 ********* 2025-07-12 14:02:06.768624 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 14:02:06.768635 | orchestrator | 2025-07-12 14:02:06.768645 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-07-12 14:02:06.768656 | orchestrator | Saturday 12 July 2025 14:00:07 +0000 (0:00:01.016) 0:00:02.382 ********* 2025-07-12 14:02:06.768666 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-07-12 14:02:06.768677 | orchestrator | 2025-07-12 14:02:06.768687 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-07-12 14:02:06.768698 | orchestrator | Saturday 12 July 2025 14:00:10 +0000 (0:00:03.656) 0:00:06.038 ********* 2025-07-12 14:02:06.768708 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-07-12 14:02:06.768719 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-07-12 14:02:06.768730 | orchestrator | 2025-07-12 14:02:06.768740 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-07-12 14:02:06.768758 | orchestrator | Saturday 12 July 2025 14:00:17 +0000 (0:00:06.334) 0:00:12.373 ********* 2025-07-12 14:02:06.768769 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-12 14:02:06.768780 | orchestrator | 2025-07-12 14:02:06.768790 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-07-12 14:02:06.768801 | orchestrator | Saturday 12 July 2025 14:00:20 +0000 (0:00:03.037) 0:00:15.411 ********* 2025-07-12 14:02:06.768811 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-12 14:02:06.768822 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-07-12 
14:02:06.768833 | orchestrator | 2025-07-12 14:02:06.768843 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-07-12 14:02:06.768854 | orchestrator | Saturday 12 July 2025 14:00:23 +0000 (0:00:03.739) 0:00:19.150 ********* 2025-07-12 14:02:06.768865 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-12 14:02:06.768875 | orchestrator | 2025-07-12 14:02:06.768886 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-07-12 14:02:06.768897 | orchestrator | Saturday 12 July 2025 14:00:27 +0000 (0:00:03.641) 0:00:22.792 ********* 2025-07-12 14:02:06.768922 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-07-12 14:02:06.768933 | orchestrator | 2025-07-12 14:02:06.768956 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-07-12 14:02:06.768967 | orchestrator | Saturday 12 July 2025 14:00:31 +0000 (0:00:04.117) 0:00:26.909 ********* 2025-07-12 14:02:06.768977 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:02:06.768988 | orchestrator | 2025-07-12 14:02:06.768999 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-07-12 14:02:06.769017 | orchestrator | Saturday 12 July 2025 14:00:35 +0000 (0:00:03.284) 0:00:30.194 ********* 2025-07-12 14:02:06.769028 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:02:06.769039 | orchestrator | 2025-07-12 14:02:06.769050 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-07-12 14:02:06.769060 | orchestrator | Saturday 12 July 2025 14:00:39 +0000 (0:00:04.291) 0:00:34.486 ********* 2025-07-12 14:02:06.769071 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:02:06.769081 | orchestrator | 2025-07-12 14:02:06.769092 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-07-12 
14:02:06.769102 | orchestrator | Saturday 12 July 2025 14:00:42 +0000 (0:00:03.667) 0:00:38.153 ********* 2025-07-12 14:02:06.769121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 14:02:06.769137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': 
'9511'}}}}) 2025-07-12 14:02:06.769157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 14:02:06.769169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:02:06.769189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:02:06.769206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:02:06.769217 | orchestrator | 2025-07-12 14:02:06.769229 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-07-12 14:02:06.769239 | orchestrator | Saturday 12 July 2025 14:00:44 +0000 (0:00:01.997) 0:00:40.150 ********* 2025-07-12 14:02:06.769250 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:02:06.769261 | orchestrator | 2025-07-12 14:02:06.769271 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-07-12 14:02:06.769289 | orchestrator | Saturday 12 July 2025 14:00:45 +0000 (0:00:00.178) 0:00:40.329 ********* 
2025-07-12 14:02:06.769299 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:02:06.769347 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:02:06.769358 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:02:06.769368 | orchestrator | 2025-07-12 14:02:06.769379 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-07-12 14:02:06.769390 | orchestrator | Saturday 12 July 2025 14:00:45 +0000 (0:00:00.540) 0:00:40.869 ********* 2025-07-12 14:02:06.769401 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-12 14:02:06.769411 | orchestrator | 2025-07-12 14:02:06.769422 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-07-12 14:02:06.769433 | orchestrator | Saturday 12 July 2025 14:00:46 +0000 (0:00:00.972) 0:00:41.841 ********* 2025-07-12 14:02:06.769444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 14:02:06.769456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 14:02:06.769476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 14:02:06.769494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:02:06.769512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:02:06.769540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:02:06.769552 | orchestrator | 2025-07-12 14:02:06.769563 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-07-12 14:02:06.769574 | orchestrator | Saturday 12 July 2025 14:00:49 +0000 (0:00:02.849) 0:00:44.691 ********* 2025-07-12 14:02:06.769585 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:02:06.769596 | orchestrator | ok: [testbed-node-1] 2025-07-12 14:02:06.769607 | orchestrator | ok: [testbed-node-2] 2025-07-12 14:02:06.769617 | orchestrator | 2025-07-12 14:02:06.769628 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-07-12 14:02:06.769639 | orchestrator | Saturday 12 July 2025 14:00:50 +0000 (0:00:00.531) 0:00:45.222 ********* 2025-07-12 14:02:06.769650 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 14:02:06.769660 | orchestrator | 2025-07-12 14:02:06.769671 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-07-12 14:02:06.769682 | orchestrator | Saturday 12 July 2025 14:00:51 +0000 (0:00:01.323) 0:00:46.545 ********* 2025-07-12 14:02:06.769702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 14:02:06.769720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 14:02:06.769741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 14:02:06.769753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:02:06.769765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:02:06.769782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:02:06.769800 | orchestrator | 2025-07-12 14:02:06.769811 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-07-12 14:02:06.769822 | orchestrator | Saturday 12 July 2025 14:00:54 +0000 (0:00:03.557) 0:00:50.102 ********* 2025-07-12 14:02:06.769844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 14:02:06.769856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 14:02:06.769868 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:02:06.769879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 14:02:06.769897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 14:02:06.769909 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:02:06.769920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 14:02:06.769944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 14:02:06.769955 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:02:06.769966 | orchestrator | 2025-07-12 14:02:06.769977 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-07-12 14:02:06.769988 | orchestrator | Saturday 12 July 2025 14:00:56 +0000 (0:00:01.913) 0:00:52.016 ********* 2025-07-12 14:02:06.769999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 14:02:06.770011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 14:02:06.770071 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:02:06.770093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 14:02:06.770118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 14:02:06.770129 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:02:06.770140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 14:02:06.770152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 14:02:06.770163 | orchestrator | skipping: 
[testbed-node-2] 2025-07-12 14:02:06.770173 | orchestrator | 2025-07-12 14:02:06.770184 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-07-12 14:02:06.770195 | orchestrator | Saturday 12 July 2025 14:00:59 +0000 (0:00:02.811) 0:00:54.828 ********* 2025-07-12 14:02:06.770206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 14:02:06.770232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 14:02:06.770249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 14:02:06.770260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:02:06.770272 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:02:06.770283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:02:06.770372 | orchestrator | 2025-07-12 14:02:06.770393 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-07-12 14:02:06.770405 | orchestrator | Saturday 12 July 2025 14:01:02 +0000 (0:00:03.261) 0:00:58.089 ********* 2025-07-12 14:02:06.770416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 14:02:06.770433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 14:02:06.770445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 14:02:06.770457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:02:06.770482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:02:06.770494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:02:06.770505 | orchestrator | 2025-07-12 14:02:06.770520 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-07-12 14:02:06.770531 | orchestrator | Saturday 12 July 2025 14:01:09 +0000 (0:00:07.039) 0:01:05.129 ********* 2025-07-12 14:02:06.770542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 
'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 14:02:06.770554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 14:02:06.770565 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:02:06.770576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 14:02:06.770601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 14:02:06.770612 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:02:06.770628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 14:02:06.770640 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 14:02:06.770650 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:02:06.770661 | orchestrator | 2025-07-12 14:02:06.770672 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-07-12 14:02:06.770683 | orchestrator | Saturday 12 July 2025 14:01:10 +0000 (0:00:00.707) 0:01:05.837 ********* 2025-07-12 14:02:06.770694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 
'listen_port': '9511'}}}}) 2025-07-12 14:02:06.770718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 14:02:06.770730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 14:02:06.770746 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:02:06.770757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:02:06.770768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:02:06.770790 | orchestrator | 2025-07-12 14:02:06.770801 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-07-12 14:02:06.770811 | orchestrator | Saturday 12 July 2025 14:01:12 +0000 (0:00:01.806) 0:01:07.643 ********* 2025-07-12 14:02:06.770822 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:02:06.770833 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:02:06.770844 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:02:06.770854 | orchestrator | 2025-07-12 14:02:06.770865 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-07-12 14:02:06.770876 | orchestrator | Saturday 12 July 2025 14:01:12 +0000 (0:00:00.247) 0:01:07.890 ********* 2025-07-12 14:02:06.770886 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:02:06.770897 | orchestrator | 2025-07-12 14:02:06.770908 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-07-12 14:02:06.770918 | orchestrator | Saturday 12 July 2025 14:01:14 +0000 (0:00:02.073) 0:01:09.964 ********* 2025-07-12 14:02:06.770929 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:02:06.770939 | orchestrator | 2025-07-12 14:02:06.770950 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-07-12 14:02:06.770967 | orchestrator | Saturday 12 July 2025 14:01:16 +0000 (0:00:02.187) 0:01:12.151 ********* 2025-07-12 14:02:06.770977 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:02:06.770986 | orchestrator | 2025-07-12 14:02:06.770995 | orchestrator | TASK [magnum : Flush handlers] 
************************************************* 2025-07-12 14:02:06.771005 | orchestrator | Saturday 12 July 2025 14:01:32 +0000 (0:00:15.483) 0:01:27.635 ********* 2025-07-12 14:02:06.771014 | orchestrator | 2025-07-12 14:02:06.771023 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-07-12 14:02:06.771033 | orchestrator | Saturday 12 July 2025 14:01:32 +0000 (0:00:00.074) 0:01:27.710 ********* 2025-07-12 14:02:06.771042 | orchestrator | 2025-07-12 14:02:06.771052 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-07-12 14:02:06.771061 | orchestrator | Saturday 12 July 2025 14:01:32 +0000 (0:00:00.068) 0:01:27.778 ********* 2025-07-12 14:02:06.771070 | orchestrator | 2025-07-12 14:02:06.771080 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-07-12 14:02:06.771089 | orchestrator | Saturday 12 July 2025 14:01:32 +0000 (0:00:00.067) 0:01:27.845 ********* 2025-07-12 14:02:06.771098 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:02:06.771108 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:02:06.771117 | orchestrator | changed: [testbed-node-2] 2025-07-12 14:02:06.771127 | orchestrator | 2025-07-12 14:02:06.771136 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-07-12 14:02:06.771145 | orchestrator | Saturday 12 July 2025 14:01:52 +0000 (0:00:20.226) 0:01:48.072 ********* 2025-07-12 14:02:06.771155 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:02:06.771168 | orchestrator | changed: [testbed-node-2] 2025-07-12 14:02:06.771178 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:02:06.771187 | orchestrator | 2025-07-12 14:02:06.771197 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 14:02:06.771207 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 
failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 14:02:06.771217 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 14:02:06.771226 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 14:02:06.771242 | orchestrator | 2025-07-12 14:02:06.771251 | orchestrator | 2025-07-12 14:02:06.771261 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 14:02:06.771270 | orchestrator | Saturday 12 July 2025 14:02:06 +0000 (0:00:13.088) 0:02:01.160 ********* 2025-07-12 14:02:06.771280 | orchestrator | =============================================================================== 2025-07-12 14:02:06.771289 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 20.23s 2025-07-12 14:02:06.771299 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.48s 2025-07-12 14:02:06.771327 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 13.09s 2025-07-12 14:02:06.771338 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 7.04s 2025-07-12 14:02:06.771347 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.33s 2025-07-12 14:02:06.771356 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.29s 2025-07-12 14:02:06.771366 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.13s 2025-07-12 14:02:06.771375 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.74s 2025-07-12 14:02:06.771384 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.67s 2025-07-12 14:02:06.771394 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.66s 
2025-07-12 14:02:06.771403 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.64s 2025-07-12 14:02:06.771412 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 3.56s 2025-07-12 14:02:06.771421 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.27s 2025-07-12 14:02:06.771431 | orchestrator | magnum : Copying over config.json files for services -------------------- 3.26s 2025-07-12 14:02:06.771440 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.04s 2025-07-12 14:02:06.771450 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.85s 2025-07-12 14:02:06.771459 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 2.81s 2025-07-12 14:02:06.771468 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.19s 2025-07-12 14:02:06.771478 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.07s 2025-07-12 14:02:06.771487 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 2.00s 2025-07-12 14:02:06.771497 | orchestrator | 2025-07-12 14:02:06 | INFO  | Task 29082a66-c2ea-4bd3-b778-7dd233ba03ae is in state STARTED 2025-07-12 14:02:06.771507 | orchestrator | 2025-07-12 14:02:06 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:02:09.828035 | orchestrator | 2025-07-12 14:02:09 | INFO  | Task e34d2724-8ec9-4ff5-b21a-4bb12765a687 is in state STARTED 2025-07-12 14:02:09.831158 | orchestrator | 2025-07-12 14:02:09 | INFO  | Task a25ae6e3-db47-496a-b358-b1c12cf33699 is in state STARTED 2025-07-12 14:02:09.833757 | orchestrator | 2025-07-12 14:02:09 | INFO  | Task 48d81262-b5d3-474e-8571-4fb564f89c84 is in state STARTED 2025-07-12 14:02:09.836435 | orchestrator | 2025-07-12 14:02:09 | INFO  | Task 29082a66-c2ea-4bd3-b778-7dd233ba03ae is in state STARTED 2025-07-12 14:02:09.837011 | orchestrator | 2025-07-12 14:02:09 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:03:04.710529 | orchestrator | 2025-07-12 14:03:04 | INFO  | Task e34d2724-8ec9-4ff5-b21a-4bb12765a687 is in state SUCCESS 2025-07-12 14:03:04.711662 | orchestrator | 2025-07-12 14:03:04.711707 | orchestrator | 2025-07-12 14:03:04.711721 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 14:03:04.711733 | orchestrator | 2025-07-12 14:03:04.711744 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 14:03:04.711780 | orchestrator | Saturday 12 July 2025 13:58:05 +0000 (0:00:00.322) 0:00:00.322 *********
2025-07-12 14:03:04.711792 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:03:04.711805 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:03:04.711815 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:03:04.711826 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:03:04.711837 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:03:04.711847 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:03:04.711858 | orchestrator |
2025-07-12 14:03:04.711869 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 14:03:04.711880 | orchestrator | Saturday 12 July 2025 13:58:06 +0000 (0:00:01.161) 0:00:01.484 *********
2025-07-12 14:03:04.711890 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-07-12 14:03:04.711902 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-07-12 14:03:04.711912 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2025-07-12 14:03:04.711922 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2025-07-12 14:03:04.711933 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2025-07-12 14:03:04.711987 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2025-07-12 14:03:04.711999 | orchestrator |
2025-07-12 14:03:04.712010 | orchestrator | PLAY [Apply role neutron] ******************************************************
2025-07-12 14:03:04.712020 | orchestrator |
2025-07-12 14:03:04.712031 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-07-12 14:03:04.712042 | orchestrator | Saturday 12 July 2025 13:58:07 +0000 (0:00:00.947) 0:00:02.431 *********
2025-07-12 14:03:04.712055 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 14:03:04.712067 | orchestrator |
2025-07-12 14:03:04.712077 | orchestrator | TASK [neutron : Get container facts] *******************************************
2025-07-12 14:03:04.712088 | orchestrator | Saturday 12 July 2025 13:58:08 +0000 (0:00:00.952) 0:00:03.383 *********
2025-07-12 14:03:04.712103 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:03:04.712115 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:03:04.712125 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:03:04.712136 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:03:04.712146 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:03:04.712157 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:03:04.712168 | orchestrator |
2025-07-12 14:03:04.712179 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2025-07-12 14:03:04.712190 | orchestrator | Saturday 12 July 2025 13:58:09 +0000 (0:00:01.049) 0:00:04.433 *********
2025-07-12 14:03:04.712201 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:03:04.712211 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:03:04.712222 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:03:04.712233 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:03:04.712244 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:03:04.712255 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:03:04.712265 | orchestrator |
2025-07-12 14:03:04.712276 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2025-07-12 14:03:04.712287 | orchestrator | Saturday 12 July 2025 13:58:10 +0000 (0:00:00.723) 0:00:05.502 *********
2025-07-12 14:03:04.712298 | orchestrator | ok: [testbed-node-0] => {
2025-07-12 14:03:04.712365 | orchestrator |  "changed": false,
2025-07-12 14:03:04.712386 | orchestrator |  "msg": "All assertions passed"
2025-07-12 14:03:04.712404 | orchestrator | }
2025-07-12 14:03:04.712423 | orchestrator | ok: [testbed-node-1] => {
2025-07-12 14:03:04.712437 | orchestrator |  "changed": false,
2025-07-12 14:03:04.712448 | orchestrator |  "msg": "All assertions passed"
2025-07-12 14:03:04.712459 | orchestrator | }
2025-07-12 14:03:04.712469 | orchestrator | ok: [testbed-node-2] => {
2025-07-12 14:03:04.712480 | orchestrator |  "changed": false,
2025-07-12 14:03:04.712491 | orchestrator |  "msg": "All assertions passed"
2025-07-12 14:03:04.712501 | orchestrator | }
2025-07-12 14:03:04.712512 | orchestrator | ok: [testbed-node-3] => {
2025-07-12 14:03:04.712534 | orchestrator |  "changed": false,
2025-07-12 14:03:04.712545 | orchestrator |  "msg": "All assertions passed"
2025-07-12 14:03:04.712555 | orchestrator | }
2025-07-12 14:03:04.712566 | orchestrator | ok: [testbed-node-4] => {
2025-07-12 14:03:04.712577 | orchestrator |  "changed": false,
2025-07-12 14:03:04.712587 | orchestrator |  "msg": "All assertions passed"
2025-07-12 14:03:04.712598 | orchestrator | }
2025-07-12 14:03:04.712608 | orchestrator | ok: [testbed-node-5] => {
2025-07-12 14:03:04.712619 | orchestrator |  "changed": false,
2025-07-12 14:03:04.712630 | orchestrator |  "msg": "All assertions passed"
2025-07-12 14:03:04.712640 | orchestrator | }
2025-07-12 14:03:04.712651 | orchestrator |
2025-07-12 14:03:04.712662 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2025-07-12 14:03:04.712673 | orchestrator | Saturday 12 July 2025 13:58:11 +0000 (0:00:00.723) 0:00:06.226 *********
2025-07-12 14:03:04.712697 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:03:04.712708 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:03:04.712718 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:03:04.712729 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:03:04.712740 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:03:04.712750 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:03:04.712761 | orchestrator |
2025-07-12 14:03:04.712771 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2025-07-12 14:03:04.712782 | orchestrator | Saturday 12 July 2025 13:58:11 +0000 (0:00:00.610) 0:00:06.836 *********
2025-07-12 14:03:04.712793 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2025-07-12 14:03:04.712804 | orchestrator |
2025-07-12 14:03:04.712814 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2025-07-12 14:03:04.712825 | orchestrator | Saturday 12 July 2025 13:58:15 +0000 (0:00:03.511) 0:00:10.348 *********
2025-07-12 14:03:04.712836 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2025-07-12 14:03:04.712848 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2025-07-12 14:03:04.712859 | orchestrator |
2025-07-12 14:03:04.712882 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2025-07-12 14:03:04.712893 | orchestrator | Saturday 12 July 2025 13:58:21 +0000 (0:00:06.359) 0:00:16.707 *********
2025-07-12 14:03:04.712904 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-12 14:03:04.712915 | orchestrator |
2025-07-12 14:03:04.712926 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2025-07-12 14:03:04.712936 | orchestrator | Saturday 12 July 2025 13:58:24 +0000 (0:00:03.143) 0:00:19.850 *********
2025-07-12 14:03:04.712997 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-12 14:03:04.713008 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2025-07-12 14:03:04.713019 | orchestrator |
2025-07-12 14:03:04.713030 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2025-07-12 14:03:04.713041 | orchestrator | Saturday 12 July 2025 13:58:28 +0000 (0:00:03.784) 0:00:23.635 *********
2025-07-12 14:03:04.713052 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-12 14:03:04.713062 | orchestrator |
2025-07-12 14:03:04.713074 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2025-07-12 14:03:04.713084 | orchestrator | Saturday 12 July 2025 13:58:31 +0000 (0:00:03.052) 0:00:26.687 *********
2025-07-12 14:03:04.713095 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2025-07-12 14:03:04.713105 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2025-07-12 14:03:04.713116 | orchestrator |
2025-07-12 14:03:04.713126 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-07-12 14:03:04.713137 | orchestrator | Saturday 12 July 2025 13:58:38 +0000 (0:00:06.804) 0:00:33.491 *********
2025-07-12 14:03:04.713147 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:03:04.713166 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:03:04.713176 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:03:04.713187 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:03:04.713197 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:03:04.713208 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:03:04.713218 | orchestrator |
2025-07-12 14:03:04.713229 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2025-07-12 14:03:04.713240 | orchestrator | Saturday 12 July 2025 13:58:39 +0000 (0:00:00.737) 0:00:34.229 *********
2025-07-12 14:03:04.713251 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:03:04.713261 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:03:04.713272 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:03:04.713282 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:03:04.713293 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:03:04.713303 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:03:04.713339 | orchestrator |
2025-07-12 14:03:04.713350 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2025-07-12 14:03:04.713361 | orchestrator | Saturday 12 July 2025 13:58:41 +0000 (0:00:02.361) 0:00:36.590 *********
2025-07-12 14:03:04.713372 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:03:04.713382 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:03:04.713393 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:03:04.713404 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:03:04.713414 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:03:04.713425 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:03:04.713435 | orchestrator |
2025-07-12 14:03:04.713446 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-07-12 14:03:04.713457 | orchestrator | Saturday 12 July 2025 13:58:42 +0000 (0:00:01.151) 0:00:37.742 *********
2025-07-12 14:03:04.713468 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:03:04.713478 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:03:04.713489 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:03:04.713499 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:03:04.713510 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:03:04.713520 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:03:04.713531 | orchestrator |
2025-07-12 14:03:04.713542 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2025-07-12 14:03:04.713552 | orchestrator | Saturday 12 July 2025 13:58:45 +0000 (0:00:02.989) 0:00:40.731 *********
2025-07-12 14:03:04.713571 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 14:03:04.713653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 14:03:04.713676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 14:03:04.713689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 14:03:04.713700 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 14:03:04.713719 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 14:03:04.713731 | orchestrator |
2025-07-12 14:03:04.713742 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2025-07-12 14:03:04.713753 | orchestrator | Saturday 12 July 2025 13:58:49 +0000 (0:00:04.009) 0:00:44.740 *********
2025-07-12 14:03:04.713764 | orchestrator | [WARNING]: Skipped
2025-07-12 14:03:04.713775 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2025-07-12 14:03:04.713786 | orchestrator | due to this access issue:
2025-07-12 14:03:04.713803 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2025-07-12 14:03:04.713814 | orchestrator | a directory
2025-07-12 14:03:04.713825 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-12 14:03:04.713835 | orchestrator |
2025-07-12 14:03:04.713879 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-07-12 14:03:04.713892 | orchestrator | Saturday 12 July 2025 13:58:50 +0000 (0:00:00.961) 0:00:45.702 *********
2025-07-12 14:03:04.713903 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 14:03:04.713915 | orchestrator |
2025-07-12 14:03:04.713926 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2025-07-12 14:03:04.713936 | orchestrator | Saturday 12 July 2025 13:58:51 +0000 (0:00:01.097) 0:00:46.799 *********
2025-07-12 14:03:04.713955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 14:03:04.713977 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 14:03:04.713996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 14:03:04.714014 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 14:03:04.714161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 14:03:04.714175 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 14:03:04.714186 | orchestrator |
2025-07-12 14:03:04.714197 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] ***
2025-07-12 14:03:04.714208 | orchestrator | Saturday 12 July 2025 13:58:55 +0000 (0:00:03.263) 0:00:50.063 *********
2025-07-12 14:03:04.714220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 14:03:04.714231 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:03:04.714257 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 14:03:04.714301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 14:03:04.714399 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:03:04.714415 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:03:04.714427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 14:03:04.714439 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:03:04.714450 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 14:03:04.714461 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:03:04.714472 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 14:03:04.714482 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:03:04.714493 | orchestrator |
2025-07-12 14:03:04.714504 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] *****
2025-07-12 14:03:04.714514 | orchestrator | Saturday 12 July 2025 13:58:58 +0000 (0:00:02.864) 0:00:52.927 *********
2025-07-12 14:03:04.714531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 14:03:04.714551 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:03:04.714604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 14:03:04.714618 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:03:04.714629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 14:03:04.714640 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:03:04.714651 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 14:03:04.714662 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:03:04.714673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 14:03:04.714692 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:03:04.714708 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 14:03:04.714720 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:03:04.714731 | orchestrator |
2025-07-12 14:03:04.714742 | orchestrator | TASK [neutron : Creating TLS backend PEM File] *********************************
2025-07-12 14:03:04.714758 | orchestrator | Saturday 12 July 2025 13:59:01 +0000 (0:00:03.255) 0:00:56.479 *********
2025-07-12 14:03:04.714769 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:03:04.714780 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:04.714790 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:04.714801 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:04.714811 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:04.714822 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:04.714833 | orchestrator | 2025-07-12 14:03:04.714843 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-07-12 14:03:04.714854 | orchestrator | Saturday 12 July 2025 13:59:04 +0000 (0:00:03.255) 0:00:59.735 ********* 2025-07-12 14:03:04.714865 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:04.714875 | orchestrator | 2025-07-12 14:03:04.714886 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-07-12 14:03:04.714896 | orchestrator | Saturday 12 July 2025 13:59:04 +0000 (0:00:00.163) 0:00:59.898 ********* 2025-07-12 14:03:04.714905 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:04.714915 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:04.714924 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:04.714934 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:04.714943 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:04.714952 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:04.714962 | orchestrator | 2025-07-12 14:03:04.714971 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-07-12 14:03:04.714981 | orchestrator | Saturday 12 July 2025 13:59:05 +0000 (0:00:00.835) 0:01:00.734 ********* 2025-07-12 14:03:04.714991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 14:03:04.715008 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:04.715018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 14:03:04.715028 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:04.715042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': 
True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 14:03:04.715052 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:04.715072 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 14:03:04.715089 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:04.715113 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 14:03:04.715134 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:04.715149 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 14:03:04.715174 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:04.715190 | orchestrator | 2025-07-12 14:03:04.715205 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-07-12 14:03:04.715220 | orchestrator | Saturday 12 July 2025 13:59:09 +0000 (0:00:04.090) 0:01:04.824 ********* 2025-07-12 14:03:04.715242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 14:03:04.715271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 14:03:04.715284 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 14:03:04.715295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 14:03:04.715371 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 14:03:04.715389 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 14:03:04.715399 | orchestrator | 2025-07-12 14:03:04.715409 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-07-12 14:03:04.715419 | orchestrator | Saturday 12 July 2025 13:59:14 +0000 (0:00:04.396) 0:01:09.221 ********* 2025-07-12 14:03:04.715436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 14:03:04.715447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 14:03:04.715466 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 14:03:04.715477 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 14:03:04.715491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 14:03:04.715506 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 14:03:04.715517 | orchestrator | 2025-07-12 14:03:04.715527 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-07-12 14:03:04.715536 | orchestrator | Saturday 12 July 2025 13:59:21 +0000 (0:00:07.501) 0:01:16.722 ********* 2025-07-12 14:03:04.715546 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 14:03:04.715562 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:04.715572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 14:03:04.715582 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 14:03:04.715600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 14:03:04.715610 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:04.715629 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 14:03:04.715639 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:04.715649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 14:03:04.715666 | orchestrator | 2025-07-12 14:03:04.715676 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-07-12 14:03:04.715685 | orchestrator | Saturday 12 July 2025 13:59:26 +0000 (0:00:04.339) 0:01:21.062 ********* 2025-07-12 14:03:04.715695 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:04.715704 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:03:04.715714 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:04.715723 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:04.715731 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:03:04.715738 | orchestrator | changed: [testbed-node-2] 2025-07-12 14:03:04.715746 | orchestrator | 2025-07-12 14:03:04.715754 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-07-12 14:03:04.715761 | orchestrator | Saturday 12 July 2025 13:59:29 +0000 (0:00:02.985) 0:01:24.048 ********* 2025-07-12 14:03:04.715769 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 
'timeout': '30'}}})  2025-07-12 14:03:04.715777 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:04.715789 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 14:03:04.715801 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:04.715827 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 14:03:04.715855 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:04.715867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 14:03:04.715881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 14:03:04.715894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 14:03:04.715907 | orchestrator | 2025-07-12 14:03:04.715921 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-07-12 14:03:04.715934 | orchestrator | Saturday 12 July 2025 13:59:34 +0000 (0:00:05.272) 0:01:29.320 ********* 2025-07-12 14:03:04.715954 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:04.715968 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:04.715980 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:04.715988 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:04.715996 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:04.716003 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:04.716011 | orchestrator | 2025-07-12 14:03:04.716019 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-07-12 14:03:04.716026 | orchestrator | Saturday 12 July 2025 13:59:37 +0000 (0:00:02.809) 0:01:32.130 ********* 2025-07-12 14:03:04.716034 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:04.716042 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:04.716050 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:04.716064 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:04.716071 | orchestrator | 
skipping: [testbed-node-3] 2025-07-12 14:03:04.716079 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:04.716087 | orchestrator | 2025-07-12 14:03:04.716095 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-07-12 14:03:04.716102 | orchestrator | Saturday 12 July 2025 13:59:39 +0000 (0:00:02.426) 0:01:34.557 ********* 2025-07-12 14:03:04.716110 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:04.716118 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:04.716126 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:04.716140 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:04.716148 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:04.716155 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:04.716163 | orchestrator | 2025-07-12 14:03:04.716171 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-07-12 14:03:04.716178 | orchestrator | Saturday 12 July 2025 13:59:41 +0000 (0:00:01.809) 0:01:36.366 ********* 2025-07-12 14:03:04.716186 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:04.716194 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:04.716202 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:04.716209 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:04.716217 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:04.716225 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:04.716232 | orchestrator | 2025-07-12 14:03:04.716240 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-07-12 14:03:04.716248 | orchestrator | Saturday 12 July 2025 13:59:43 +0000 (0:00:02.035) 0:01:38.401 ********* 2025-07-12 14:03:04.716255 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:04.716263 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:04.716271 | orchestrator | 
skipping: [testbed-node-2] 2025-07-12 14:03:04.716278 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:04.716286 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:04.716293 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:04.716301 | orchestrator | 2025-07-12 14:03:04.716327 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-07-12 14:03:04.716336 | orchestrator | Saturday 12 July 2025 13:59:46 +0000 (0:00:03.237) 0:01:41.639 ********* 2025-07-12 14:03:04.716344 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:04.716352 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:04.716359 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:04.716367 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:04.716375 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:04.716382 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:04.716390 | orchestrator | 2025-07-12 14:03:04.716398 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-07-12 14:03:04.716406 | orchestrator | Saturday 12 July 2025 13:59:49 +0000 (0:00:02.442) 0:01:44.081 ********* 2025-07-12 14:03:04.716414 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-12 14:03:04.716422 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:04.716429 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-12 14:03:04.716437 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:04.716445 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-12 14:03:04.716453 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:04.716460 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-12 
14:03:04.716468 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:04.716476 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-12 14:03:04.716484 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:04.716491 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-12 14:03:04.716504 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:04.716512 | orchestrator | 2025-07-12 14:03:04.716520 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-07-12 14:03:04.716528 | orchestrator | Saturday 12 July 2025 13:59:51 +0000 (0:00:02.017) 0:01:46.099 ********* 2025-07-12 14:03:04.716540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 14:03:04.716549 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:04.716563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 14:03:04.716571 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:04.716579 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 14:03:04.716587 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:04.716595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 14:03:04.716609 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:04.716617 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 14:03:04.716625 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:04.716637 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 14:03:04.716645 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:04.716652 | orchestrator | 2025-07-12 14:03:04.716660 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-07-12 14:03:04.716668 | orchestrator | Saturday 12 July 2025 13:59:54 +0000 (0:00:03.071) 0:01:49.170 ********* 2025-07-12 14:03:04.716682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 14:03:04.716691 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:04.716699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': 
True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 14:03:04.716712 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:04.716720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 14:03:04.716728 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:04.716739 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 14:03:04.716748 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:04.716756 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 14:03:04.716764 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:04.716777 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 14:03:04.716786 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:04.716793 | orchestrator | 2025-07-12 14:03:04.716801 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-07-12 14:03:04.716809 | orchestrator | Saturday 12 July 2025 13:59:57 +0000 (0:00:03.386) 0:01:52.557 ********* 2025-07-12 14:03:04.716816 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:04.716824 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:04.716832 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:04.716845 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:04.716853 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:04.716861 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:04.716868 | orchestrator | 2025-07-12 14:03:04.716876 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-07-12 14:03:04.716884 | orchestrator | Saturday 12 July 2025 14:00:00 +0000 (0:00:02.390) 0:01:54.947 ********* 2025-07-12 14:03:04.716892 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:04.716899 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:04.716907 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:04.716914 | orchestrator | changed: [testbed-node-3] 2025-07-12 14:03:04.716922 | orchestrator | changed: [testbed-node-4] 2025-07-12 14:03:04.716930 | orchestrator | changed: [testbed-node-5] 2025-07-12 14:03:04.716938 | orchestrator | 2025-07-12 14:03:04.716945 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] 
************************ 2025-07-12 14:03:04.716953 | orchestrator | Saturday 12 July 2025 14:00:03 +0000 (0:00:03.410) 0:01:58.357 ********* 2025-07-12 14:03:04.716960 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:04.716968 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:04.716976 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:04.716984 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:04.716991 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:04.716999 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:04.717006 | orchestrator | 2025-07-12 14:03:04.717014 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-07-12 14:03:04.717022 | orchestrator | Saturday 12 July 2025 14:00:05 +0000 (0:00:02.437) 0:02:00.795 ********* 2025-07-12 14:03:04.717030 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:04.717037 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:04.717045 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:04.717053 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:04.717060 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:04.717068 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:04.717076 | orchestrator | 2025-07-12 14:03:04.717083 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-07-12 14:03:04.717091 | orchestrator | Saturday 12 July 2025 14:00:08 +0000 (0:00:02.401) 0:02:03.196 ********* 2025-07-12 14:03:04.717099 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:04.717106 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:04.717114 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:04.717122 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:04.717129 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:04.717137 | orchestrator | skipping: [testbed-node-5] 
2025-07-12 14:03:04.717144 | orchestrator | 2025-07-12 14:03:04.717152 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-07-12 14:03:04.717160 | orchestrator | Saturday 12 July 2025 14:00:10 +0000 (0:00:02.373) 0:02:05.570 ********* 2025-07-12 14:03:04.717167 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:04.717175 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:04.717183 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:04.717190 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:04.717198 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:04.717206 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:04.717213 | orchestrator | 2025-07-12 14:03:04.717221 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-07-12 14:03:04.717232 | orchestrator | Saturday 12 July 2025 14:00:14 +0000 (0:00:04.155) 0:02:09.726 ********* 2025-07-12 14:03:04.717240 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:04.717247 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:04.717255 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:04.717263 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:04.717270 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:04.717278 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:04.717291 | orchestrator | 2025-07-12 14:03:04.717299 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-07-12 14:03:04.717321 | orchestrator | Saturday 12 July 2025 14:00:17 +0000 (0:00:03.037) 0:02:12.763 ********* 2025-07-12 14:03:04.717331 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:04.717338 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:04.717346 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:04.717354 | orchestrator | skipping: [testbed-node-2] 
2025-07-12 14:03:04.717362 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:04.717369 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:04.717377 | orchestrator | 2025-07-12 14:03:04.717385 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-07-12 14:03:04.717393 | orchestrator | Saturday 12 July 2025 14:00:19 +0000 (0:00:01.936) 0:02:14.699 ********* 2025-07-12 14:03:04.717401 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:04.717413 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:04.717421 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:04.717428 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:04.717436 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:04.717443 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:04.717451 | orchestrator | 2025-07-12 14:03:04.717459 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-07-12 14:03:04.717467 | orchestrator | Saturday 12 July 2025 14:00:22 +0000 (0:00:02.479) 0:02:17.178 ********* 2025-07-12 14:03:04.717474 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:04.717482 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:04.717490 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:04.717497 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:04.717505 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:04.717513 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:04.717520 | orchestrator | 2025-07-12 14:03:04.717528 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-07-12 14:03:04.717536 | orchestrator | Saturday 12 July 2025 14:00:24 +0000 (0:00:02.176) 0:02:19.355 ********* 2025-07-12 14:03:04.717543 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  
2025-07-12 14:03:04.717551 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:04.717559 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-07-12 14:03:04.717567 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:04.717575 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-07-12 14:03:04.717582 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:04.717590 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-07-12 14:03:04.717598 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:04.717606 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-07-12 14:03:04.717613 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:04.717621 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-07-12 14:03:04.717629 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:04.717637 | orchestrator | 2025-07-12 14:03:04.717645 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-07-12 14:03:04.717652 | orchestrator | Saturday 12 July 2025 14:00:28 +0000 (0:00:04.304) 0:02:23.659 ********* 2025-07-12 14:03:04.717660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 14:03:04.717677 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:04.717689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 14:03:04.717698 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:04.717712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 14:03:04.717721 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:04.717729 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 14:03:04.717737 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:04.717745 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 14:03:04.717760 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:04.717768 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 14:03:04.717776 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:04.717784 | orchestrator | 2025-07-12 14:03:04.717792 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-07-12 14:03:04.717799 | orchestrator | Saturday 12 July 2025 14:00:30 +0000 (0:00:02.177) 0:02:25.837 ********* 2025-07-12 14:03:04.717811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 14:03:04.717825 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 14:03:04.717834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 14:03:04.717842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 14:03:04.717859 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 14:03:04.717868 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 14:03:04.717876 | orchestrator | 2025-07-12 14:03:04.717884 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-07-12 14:03:04.717895 | orchestrator | Saturday 12 July 2025 14:00:34 +0000 (0:00:04.042) 0:02:29.880 ********* 2025-07-12 14:03:04.717903 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:04.717911 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:04.717919 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:04.717927 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:04.717935 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:04.717942 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:04.717950 | orchestrator | 2025-07-12 14:03:04.717958 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-07-12 14:03:04.717965 | orchestrator | Saturday 12 July 2025 14:00:35 +0000 (0:00:00.615) 0:02:30.495 ********* 2025-07-12 14:03:04.717973 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:03:04.717981 | orchestrator | 2025-07-12 14:03:04.717988 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-07-12 14:03:04.717996 | orchestrator | Saturday 12 July 2025 14:00:37 +0000 (0:00:02.381) 0:02:32.877 ********* 2025-07-12 14:03:04.718004 | 
orchestrator | changed: [testbed-node-0] 2025-07-12 14:03:04.718011 | orchestrator | 2025-07-12 14:03:04.718047 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-07-12 14:03:04.718055 | orchestrator | Saturday 12 July 2025 14:00:40 +0000 (0:00:02.267) 0:02:35.145 ********* 2025-07-12 14:03:04.718063 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:03:04.718071 | orchestrator | 2025-07-12 14:03:04.718079 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-07-12 14:03:04.718093 | orchestrator | Saturday 12 July 2025 14:01:22 +0000 (0:00:42.243) 0:03:17.389 ********* 2025-07-12 14:03:04.718101 | orchestrator | 2025-07-12 14:03:04.718108 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-07-12 14:03:04.718116 | orchestrator | Saturday 12 July 2025 14:01:22 +0000 (0:00:00.066) 0:03:17.455 ********* 2025-07-12 14:03:04.718124 | orchestrator | 2025-07-12 14:03:04.718132 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-07-12 14:03:04.718139 | orchestrator | Saturday 12 July 2025 14:01:22 +0000 (0:00:00.253) 0:03:17.709 ********* 2025-07-12 14:03:04.718147 | orchestrator | 2025-07-12 14:03:04.718155 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-07-12 14:03:04.718163 | orchestrator | Saturday 12 July 2025 14:01:22 +0000 (0:00:00.074) 0:03:17.783 ********* 2025-07-12 14:03:04.718170 | orchestrator | 2025-07-12 14:03:04.718178 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-07-12 14:03:04.718186 | orchestrator | Saturday 12 July 2025 14:01:22 +0000 (0:00:00.065) 0:03:17.849 ********* 2025-07-12 14:03:04.718193 | orchestrator | 2025-07-12 14:03:04.718201 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 
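The container definitions logged above carry kolla-style healthcheck commands such as `healthcheck_curl http://192.168.16.10:9696` and `healthcheck_port neutron-ovn-metadata-agent 6640`. As a rough illustration only (the real helpers are shell scripts shipped inside the kolla images, and `healthcheck_port` additionally takes a process name, which this sketch ignores), the two probe styles amount to:

```python
import socket
import urllib.request


def healthcheck_curl(url, timeout=5.0):
    """Rough stand-in for kolla's healthcheck_curl: True if the URL answers."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            # Treat any non-5xx answer as healthy (illustrative choice).
            return resp.status < 500
    except OSError:
        # Connection refused, timeout, DNS failure, HTTP error, ...
        return False


def healthcheck_port(host, port, timeout=5.0):
    """Rough stand-in for kolla's healthcheck_port: True if a TCP connect works.

    The real helper also checks that the named service process owns the
    listening socket; this sketch only probes reachability.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Docker runs the configured `test` command at each `interval` and marks the container unhealthy after `retries` consecutive failures, which is what the `interval`/`retries`/`start_period`/`timeout` keys in the definitions above configure.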
2025-07-12 14:03:04.718208 | orchestrator | Saturday 12 July 2025 14:01:22 +0000 (0:00:00.064) 0:03:17.913 ********* 2025-07-12 14:03:04.718221 | orchestrator | 2025-07-12 14:03:04.718234 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-07-12 14:03:04.718248 | orchestrator | Saturday 12 July 2025 14:01:23 +0000 (0:00:00.066) 0:03:17.979 ********* 2025-07-12 14:03:04.718261 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:03:04.718273 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:03:04.718286 | orchestrator | changed: [testbed-node-2] 2025-07-12 14:03:04.718299 | orchestrator | 2025-07-12 14:03:04.718358 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-07-12 14:03:04.718375 | orchestrator | Saturday 12 July 2025 14:01:47 +0000 (0:00:24.769) 0:03:42.749 ********* 2025-07-12 14:03:04.718383 | orchestrator | changed: [testbed-node-5] 2025-07-12 14:03:04.718391 | orchestrator | changed: [testbed-node-4] 2025-07-12 14:03:04.718399 | orchestrator | changed: [testbed-node-3] 2025-07-12 14:03:04.718407 | orchestrator | 2025-07-12 14:03:04.718415 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 14:03:04.718422 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-07-12 14:03:04.718431 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-07-12 14:03:04.718439 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-07-12 14:03:04.718447 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-07-12 14:03:04.718459 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-07-12 14:03:04.718467 | orchestrator | 
testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-07-12 14:03:04.718475 | orchestrator | 2025-07-12 14:03:04.718483 | orchestrator | 2025-07-12 14:03:04.718491 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 14:03:04.718499 | orchestrator | Saturday 12 July 2025 14:03:02 +0000 (0:01:14.227) 0:04:56.977 ********* 2025-07-12 14:03:04.718506 | orchestrator | =============================================================================== 2025-07-12 14:03:04.718519 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 74.23s 2025-07-12 14:03:04.718543 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 42.24s 2025-07-12 14:03:04.718557 | orchestrator | neutron : Restart neutron-server container ----------------------------- 24.77s 2025-07-12 14:03:04.718566 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.50s 2025-07-12 14:03:04.718581 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 6.80s 2025-07-12 14:03:04.718589 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.36s 2025-07-12 14:03:04.718596 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 5.27s 2025-07-12 14:03:04.718604 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.40s 2025-07-12 14:03:04.718612 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 4.34s 2025-07-12 14:03:04.718623 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 4.30s 2025-07-12 14:03:04.718637 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 4.16s 2025-07-12 14:03:04.718650 | orchestrator | neutron : Copying over existing policy file 
----------------------------- 4.09s 2025-07-12 14:03:04.718668 | orchestrator | neutron : Check neutron containers -------------------------------------- 4.04s 2025-07-12 14:03:04.718685 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 4.01s 2025-07-12 14:03:04.718698 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.78s 2025-07-12 14:03:04.718708 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.55s 2025-07-12 14:03:04.718719 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.51s 2025-07-12 14:03:04.718728 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.41s 2025-07-12 14:03:04.718739 | orchestrator | neutron : Copying over fwaas_driver.ini --------------------------------- 3.39s 2025-07-12 14:03:04.718749 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.26s 2025-07-12 14:03:04.718761 | orchestrator | 2025-07-12 14:03:04 | INFO  | Task a25ae6e3-db47-496a-b358-b1c12cf33699 is in state STARTED 2025-07-12 14:03:04.718774 | orchestrator | 2025-07-12 14:03:04 | INFO  | Task 48d81262-b5d3-474e-8571-4fb564f89c84 is in state STARTED 2025-07-12 14:03:04.718785 | orchestrator | 2025-07-12 14:03:04 | INFO  | Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED 2025-07-12 14:03:04.718797 | orchestrator | 2025-07-12 14:03:04 | INFO  | Task 29082a66-c2ea-4bd3-b778-7dd233ba03ae is in state STARTED 2025-07-12 14:03:04.718808 | orchestrator | 2025-07-12 14:03:04 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:03:07.748417 | orchestrator | 2025-07-12 14:03:07 | INFO  | Task a25ae6e3-db47-496a-b358-b1c12cf33699 is in state STARTED 2025-07-12 14:03:07.748797 | orchestrator | 2025-07-12 14:03:07 | INFO  | Task 48d81262-b5d3-474e-8571-4fb564f89c84 is in state STARTED 2025-07-12 14:03:07.749812 | orchestrator | 
2025-07-12 14:03:07 | INFO  | Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED 2025-07-12 14:03:07.750296 | orchestrator | 2025-07-12 14:03:07 | INFO  | Task 29082a66-c2ea-4bd3-b778-7dd233ba03ae is in state STARTED 2025-07-12 14:03:07.750352 | orchestrator | 2025-07-12 14:03:07 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:03:53.399140 | orchestrator | 2025-07-12 14:03:53 | INFO  | Task d8890ef0-a147-4c75-972c-46fb96a5e8a8 is in state STARTED 2025-07-12 14:03:53.405494 | orchestrator | 2025-07-12 14:03:53 | INFO  | Task a25ae6e3-db47-496a-b358-b1c12cf33699 is in state SUCCESS 2025-07-12 14:03:53.405593 | orchestrator | 2025-07-12 14:03:53.407685 | orchestrator | 2025-07-12
14:03:53.407759 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 14:03:53.407771 | orchestrator | 2025-07-12 14:03:53.407780 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 14:03:53.407788 | orchestrator | Saturday 12 July 2025 14:00:21 +0000 (0:00:00.542) 0:00:00.542 ********* 2025-07-12 14:03:53.407796 | orchestrator | ok: [testbed-manager] 2025-07-12 14:03:53.407806 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:03:53.407814 | orchestrator | ok: [testbed-node-1] 2025-07-12 14:03:53.407821 | orchestrator | ok: [testbed-node-2] 2025-07-12 14:03:53.407829 | orchestrator | ok: [testbed-node-3] 2025-07-12 14:03:53.407837 | orchestrator | ok: [testbed-node-4] 2025-07-12 14:03:53.407845 | orchestrator | ok: [testbed-node-5] 2025-07-12 14:03:53.407852 | orchestrator | 2025-07-12 14:03:53.407860 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 14:03:53.407868 | orchestrator | Saturday 12 July 2025 14:00:22 +0000 (0:00:00.912) 0:00:01.454 ********* 2025-07-12 14:03:53.407876 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-07-12 14:03:53.407899 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-07-12 14:03:53.407908 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-07-12 14:03:53.407916 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-07-12 14:03:53.407923 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-07-12 14:03:53.407931 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-07-12 14:03:53.407939 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-07-12 14:03:53.407947 | orchestrator | 2025-07-12 14:03:53.407954 | orchestrator | PLAY [Apply role prometheus] *************************************************** 
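The `Task <uuid> is in state STARTED` / `Wait 1 second(s) until the next check` lines above come from a wrapper that polls each task's state until it leaves STARTED. A minimal sketch of that polling pattern (the `get_state` lookup is a hypothetical stand-in, not the actual osism client code):

```python
import time


def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=3600.0):
    """Poll task states until none remains STARTED; return the number of poll rounds.

    get_state: callable mapping a task id to its state string
    ("STARTED", "SUCCESS", "FAILURE"). Assumed interface for illustration.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    rounds = 0
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        rounds += 1
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return rounds
```

Finished tasks drop out of the polling set one by one, which matches the log above: once task `a25ae6e3-…` reports SUCCESS its play output is flushed while the remaining tasks keep being polled.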
2025-07-12 14:03:53.407962 | orchestrator | 2025-07-12 14:03:53.408107 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-07-12 14:03:53.408117 | orchestrator | Saturday 12 July 2025 14:00:22 +0000 (0:00:00.806) 0:00:02.261 ********* 2025-07-12 14:03:53.408127 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 14:03:53.408136 | orchestrator | 2025-07-12 14:03:53.408144 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-07-12 14:03:53.408174 | orchestrator | Saturday 12 July 2025 14:00:24 +0000 (0:00:01.533) 0:00:03.794 ********* 2025-07-12 14:03:53.408185 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-12 14:03:53.408197 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 14:03:53.408207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 14:03:53.408230 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 14:03:53.408239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 14:03:53.408253 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 14:03:53.408262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 14:03:53.408281 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-12 14:03:53.408291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 14:03:53.408300 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 14:03:53.408373 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 14:03:53.408383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.408455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 14:03:53.408466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.408745 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 14:03:53.408760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.408768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:53.408777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:53.408792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.408800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.408814 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 14:03:53.408829 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 14:03:53.408837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.408845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:53.408853 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:53.408862 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:53.408876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.408888 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 14:03:53.408902 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 14:03:53.408910 | orchestrator |
2025-07-12 14:03:53.408919 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-07-12 14:03:53.408927 | orchestrator | Saturday 12 July 2025 14:00:28 +0000 (0:00:04.102) 0:00:07.896 *********
2025-07-12 14:03:53.408935 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 14:03:53.408943 | orchestrator |
2025-07-12 14:03:53.408951 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2025-07-12 14:03:53.408959 | orchestrator | Saturday 12 July 2025 14:00:30 +0000 (0:00:02.216) 0:00:10.112 *********
2025-07-12 14:03:53.408968 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-07-12 14:03:53.408977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 14:03:53.408985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 14:03:53.408999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 14:03:53.409012 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 14:03:53.409026 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 14:03:53.409034 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 14:03:53.409042 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 14:03:53.409050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.409058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.409067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.409080 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:53.409097 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:53.409106 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:53.409114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.409122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.409130 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:53.409138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.409151 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-07-12 14:03:53.409174 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 14:03:53.409182 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 14:03:53.409191 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.409199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:53.409207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:53.409215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:53.409227 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 14:03:53.409240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.409252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.409260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.409268 | orchestrator |
2025-07-12 14:03:53.409277 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] ***
2025-07-12 14:03:53.409285 | orchestrator | Saturday 12 July 2025 14:00:36 +0000 (0:00:06.052) 0:00:16.165 *********
2025-07-12 14:03:53.409876 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-07-12 14:03:53.409889 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 14:03:53.409897 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:53.409943 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-07-12 14:03:53.409954 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.409963 | orchestrator | skipping: [testbed-manager]
2025-07-12 14:03:53.409972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 14:03:53.409980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.409988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.409996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:53.410011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.410086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 14:03:53.410101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.410109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.410118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:53.410126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.410135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 14:03:53.410143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.410157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.410183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:53.410196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.410205 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:03:53.410281 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:03:53.410292 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:03:53.410301 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 14:03:53.410330 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:53.410339
| orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-12 14:03:53.410346 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:53.410354 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 14:03:53.410370 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 14:03:53.410400 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-12 14:03:53.410409 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:53.410422 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 14:03:53.410430 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 14:03:53.410707 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-12 14:03:53.410719 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:53.410727 | orchestrator | 2025-07-12 14:03:53.410737 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-07-12 14:03:53.410746 | orchestrator | Saturday 12 July 2025 14:00:38 +0000 (0:00:02.040) 0:00:18.205 ********* 2025-07-12 14:03:53.410825 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-07-12 14:03:53.410846 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 14:03:53.410855 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 14:03:53.410890 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-07-12 14:03:53.410900 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 14:03:53.410908 | orchestrator | skipping: [testbed-manager] 2025-07-12 14:03:53.410916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 14:03:53.410925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 14:03:53.410938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 14:03:53.410946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 14:03:53.410995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 14:03:53.411006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 14:03:53.411022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 14:03:53.411030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 14:03:53.411039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 14:03:53.411093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 14:03:53.411103 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:53.411111 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:53.411120 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 14:03:53.411128 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 14:03:53.411156 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-12 14:03:53.411165 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:53.411178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 14:03:53.411186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 14:03:53.411194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 14:03:53.411208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}})  2025-07-12 14:03:53.411216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 14:03:53.411224 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:53.411406 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 14:03:53.411439 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 14:03:53.411448 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-12 14:03:53.411457 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:53.411469 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 14:03:53.411478 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 14:03:53.411492 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-12 14:03:53.411500 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:53.411509 | orchestrator | 2025-07-12 14:03:53.411517 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-07-12 14:03:53.411525 | orchestrator | Saturday 12 July 2025 14:00:41 +0000 (0:00:02.386) 0:00:20.592 ********* 2025-07-12 14:03:53.411533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 14:03:53.411542 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-12 14:03:53.411571 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 14:03:53.411580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 14:03:53.411589 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 14:03:53.411597 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 14:03:53.411611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 14:03:53.411619 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 14:03:53.411627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 14:03:53.411661 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 14:03:53.411693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 14:03:53.411706 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 14:03:53.411714 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 14:03:53.411728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 14:03:53.411736 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 14:03:53.411746 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-12 14:03:53.411755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 14:03:53.411784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 14:03:53.411798 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 14:03:53.411812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 14:03:53.411820 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-12 14:03:53.411828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 14:03:53.411836 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 
'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-12 14:03:53.411844 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 14:03:53.411872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 14:03:53.411882 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-12 14:03:53.411894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 14:03:53.411909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 14:03:53.411917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 14:03:53.411925 | orchestrator | 2025-07-12 14:03:53.411933 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-07-12 14:03:53.411942 | orchestrator | Saturday 12 July 
2025 14:00:48 +0000 (0:00:06.989) 0:00:27.581 ********* 2025-07-12 14:03:53.411950 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-12 14:03:53.411958 | orchestrator | 2025-07-12 14:03:53.411966 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-07-12 14:03:53.411974 | orchestrator | Saturday 12 July 2025 14:00:49 +0000 (0:00:00.961) 0:00:28.543 ********* 2025-07-12 14:03:53.411982 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1078351, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.381798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.411990 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1078335, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.373798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.412019 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1078351, 'dev': 108, 'nlink': 1, 'atime': 
1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.381798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.412032 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1078351, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.381798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.412050 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1078351, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.381798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.412059 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1078351, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.381798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 14:03:53.412068 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1078304, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.360798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.412078 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1078351, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.381798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.412087 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1078351, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.381798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.412159 | 
orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1078335, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.373798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.412181 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1078335, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.373798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.412190 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1078335, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.373798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.412201 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1078335, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.373798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.412210 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1078306, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.360798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.412219 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1078335, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.373798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.412229 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1078304, 'dev': 108, 'nlink': 1, 'atime': 
1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.360798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.412262 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1078304, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.360798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.412282 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1078304, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.360798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.412291 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1078335, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.373798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 14:03:53.412301 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1078304, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.360798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.412328 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1078306, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.360798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.412337 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1078330, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3707979, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.412346 | orchestrator | 
skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1078306, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.360798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.412380 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1078306, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.360798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.412395 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1078304, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.360798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.412407 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1078306, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.360798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.412416 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1078330, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3707979, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.412424 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1078330, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3707979, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.412432 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1078330, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 
'mtime': 1752315970.0, 'ctime': 1752326026.3707979, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.412440 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1078306, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.360798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.412476 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1078304, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.360798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 14:03:53.412486 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1078311, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.363798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.412498 | orchestrator | skipping: [testbed-node-0] ... [testbed-node-5] => (items: /operations/prometheus/hardware.rules, ceph.rules, haproxy.rules, node.rules, prometheus-extra.rules, redfish.rules, openstack.rules, ceph.rec.rules, alertmanager.rec.rules, fluentd-aggregator.rules, mysql.rules, rabbitmq.rules, elasticsearch.rules; all mode 0644, owner root:root)  2025-07-12 14:03:53.412596 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules, mode 0644, 3900 bytes)  2025-07-12 14:03:53.412725 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/hardware.rules, mode 0644, 5593 bytes)  2025-07-12 14:03:53.412946 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules, mode 0644, 55956 bytes)  2025-07-12 14:03:53.413197 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/haproxy.rules, mode 0644, 7933 bytes)  2025-07-12 14:03:53.413420 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/node.rules, mode 0644, 13522 bytes)  2025-07-12 14:03:53.413528 | orchestrator | skipping: [testbed-node-3] => 
(item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1078355, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.381798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.413536 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:53.413549 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1078360, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3827982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.413562 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1078344, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3797982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 14:03:53.413570 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1078332, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.372798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.413675 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1078360, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3827982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.413685 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1078323, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.368798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.413693 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1078360, 
'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3827982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.413702 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1078323, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.368798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.413716 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1078323, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.368798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.413731 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1078355, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.381798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.413783 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:53.413795 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1078360, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3827982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.413810 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1078323, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.368798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.413819 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1078355, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.381798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.413827 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:53.413836 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1078355, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.381798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.413845 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:53.413853 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1078361, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3827982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 14:03:53.413865 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1078323, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.368798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.413875 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1078355, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.381798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.413887 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:53.413895 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1078355, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.381798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:53.413902 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:53.413910 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1078340, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.375798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}) 2025-07-12 14:03:53.413918 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1078308, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3617978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 14:03:53.413925 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1078325, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.369798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 14:03:53.413933 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1078302, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.359798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 14:03:53.413945 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1078332, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.372798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 14:03:53.413955 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1078360, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3827982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 14:03:53.413967 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1078323, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.368798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 14:03:53.413974 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1078355, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.381798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:53.413982 | orchestrator |
2025-07-12 14:03:53.413989 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-07-12 14:03:53.413996 | orchestrator | Saturday 12 July 2025 14:01:14 +0000 (0:00:24.979) 0:00:53.523 *********
2025-07-12 14:03:53.414004 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-12 14:03:53.414011 | orchestrator |
2025-07-12 14:03:53.414054 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-07-12 14:03:53.414061 | orchestrator | Saturday 12 July 2025 14:01:14 +0000 (0:00:00.634) 0:00:54.157 *********
2025-07-12 14:03:53.414068 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2025-07-12 14:03:53.414103 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2025-07-12 14:03:53.414138 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2025-07-12 14:03:53.414172 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2025-07-12 14:03:53.414205 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2025-07-12 14:03:53.414248 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2025-07-12 14:03:53.414281 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2025-07-12 14:03:53.414364 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-12 14:03:53.414370 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-12 14:03:53.414377 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-07-12 14:03:53.414383 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-07-12 14:03:53.414390 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-07-12 14:03:53.414396 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-07-12 14:03:53.414403 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-07-12 14:03:53.414409 | orchestrator |
2025-07-12 14:03:53.414416 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-07-12 14:03:53.414424 | orchestrator | Saturday 12 July 2025 14:01:16 +0000 (0:00:01.392) 0:00:55.549 *********
2025-07-12 14:03:53.414432 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-07-12 14:03:53.414440 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:03:53.414448 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-07-12 14:03:53.414456 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-07-12 14:03:53.414463 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:03:53.414471 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:03:53.414478 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-07-12 14:03:53.414486 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:03:53.414494 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-07-12 14:03:53.414501 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:03:53.414509 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-07-12 14:03:53.414516 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:03:53.414523 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-07-12 14:03:53.414532 | orchestrator |
2025-07-12 14:03:53.414539 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-07-12 14:03:53.414547 | orchestrator | Saturday 12 July 2025 14:01:33 +0000 (0:00:17.014) 0:01:12.564 *********
2025-07-12 14:03:53.414554 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-07-12 14:03:53.414562 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:03:53.414569 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-07-12 14:03:53.414577 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:03:53.414589 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-07-12 14:03:53.414597 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:03:53.414604 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-07-12 14:03:53.414611 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:03:53.414618 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-07-12 14:03:53.414626 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:03:53.414633 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-07-12 14:03:53.414639 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:03:53.414646 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-07-12 14:03:53.414653 | orchestrator |
2025-07-12 14:03:53.414660 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-07-12 14:03:53.414668 | orchestrator | Saturday 12 July 2025 14:01:36 +0000 (0:00:03.721) 0:01:16.286 *********
2025-07-12 14:03:53.414675 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-07-12 14:03:53.414682 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:03:53.414689 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-07-12 14:03:53.414696 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:03:53.414703 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-07-12 14:03:53.414710 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:03:53.414721 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-07-12 14:03:53.414728 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-07-12 14:03:53.414735 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:03:53.414742 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-07-12 14:03:53.414752 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:03:53.414762 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-07-12 14:03:53.414773 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:03:53.414783 | orchestrator |
2025-07-12 14:03:53.414793 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2025-07-12 14:03:53.414807 | orchestrator | Saturday 12 July 2025 14:01:38 +0000 (0:00:02.015) 0:01:18.301 *********
2025-07-12 14:03:53.414818 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-12 14:03:53.414828 | orchestrator |
2025-07-12 14:03:53.414839 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2025-07-12 14:03:53.414850 | orchestrator | Saturday 12 July 2025 14:01:39 +0000 (0:00:00.709) 0:01:19.010 *********
2025-07-12 14:03:53.414860 | orchestrator | skipping: [testbed-manager]
2025-07-12 14:03:53.414869 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:03:53.414878 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:03:53.414887 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:03:53.414897 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:03:53.414907 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:03:53.414917 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:03:53.414926 | orchestrator |
2025-07-12 14:03:53.414935 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2025-07-12 14:03:53.414944 | orchestrator | Saturday 12 July 2025 14:01:40 +0000 (0:00:00.832) 0:01:19.842 *********
2025-07-12 14:03:53.414954 | orchestrator | skipping: [testbed-manager]
2025-07-12 14:03:53.414968 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:03:53.414977 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:03:53.414987 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:03:53.414997 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:03:53.415006 | orchestrator | changed: [testbed-node-1]
2025-07-12 14:03:53.415015 | orchestrator | changed: [testbed-node-2]
2025-07-12 14:03:53.415025 | orchestrator |
2025-07-12 14:03:53.415035 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2025-07-12 14:03:53.415045 | orchestrator | Saturday 12 July 2025 14:01:42 +0000 (0:00:02.021) 0:01:21.864 *********
2025-07-12 14:03:53.415055 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-07-12 14:03:53.415065 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-07-12 14:03:53.415075 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-07-12 14:03:53.415086 | orchestrator | skipping: [testbed-manager]
2025-07-12 14:03:53.415096 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:03:53.415106 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:03:53.415115 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-07-12 14:03:53.415125 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:03:53.415136 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-07-12 14:03:53.415146 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:03:53.415156 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-07-12 14:03:53.415166 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:03:53.415177 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-07-12 14:03:53.415187 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:03:53.415198 | orchestrator |
2025-07-12 14:03:53.415209 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2025-07-12 14:03:53.415219 | orchestrator | Saturday 12 July 2025 14:01:43 +0000 (0:00:01.437) 0:01:23.301 *********
2025-07-12 14:03:53.415229 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-07-12 14:03:53.415241 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:03:53.415252 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-07-12 14:03:53.415264 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:03:53.415275 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-07-12 14:03:53.415285 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:03:53.415296 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-07-12 14:03:53.415324 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:03:53.415335 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-07-12 14:03:53.415345 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:03:53.415355 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-07-12 14:03:53.415365 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-07-12 14:03:53.415375 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:03:53.415385 | orchestrator |
2025-07-12 14:03:53.415405 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2025-07-12 14:03:53.415416 | orchestrator | Saturday 12 July 2025 14:01:45 +0000 (0:00:01.431) 0:01:24.732 *********
2025-07-12 14:03:53.415427 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is not a directory
2025-07-12 14:03:53.415492 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-12 14:03:53.415503 | orchestrator |
2025-07-12 14:03:53.415514 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2025-07-12 14:03:53.415525 | orchestrator | Saturday 12 July 2025 14:01:46 +0000 (0:00:01.273) 0:01:26.006 *********
2025-07-12 14:03:53.415535 | orchestrator | skipping: [testbed-manager]
2025-07-12 14:03:53.415545 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:03:53.415555 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:03:53.415574 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:03:53.415585 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:03:53.415596 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:03:53.415607 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:03:53.415618 | orchestrator |
2025-07-12 14:03:53.415629 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2025-07-12 14:03:53.415641 | orchestrator | Saturday 12 July 2025 14:01:47 +0000 (0:00:00.890) 0:01:26.897 *********
2025-07-12 14:03:53.415651 | orchestrator | skipping: [testbed-manager]
2025-07-12 14:03:53.415662 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:03:53.415673 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:03:53.415683 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:03:53.415693 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:03:53.415704 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:03:53.415715 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:03:53.415725 | orchestrator |
2025-07-12 14:03:53.415736 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2025-07-12 14:03:53.415748 | orchestrator | Saturday 12 July 2025 14:01:48 +0000 (0:00:00.854) 0:01:27.751 *********
2025-07-12 14:03:53.415760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 14:03:53.415773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 14:03:53.415785 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-07-12 14:03:53.415797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 14:03:53.415826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.415844 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 14:03:53.415857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.415868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.415880 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 14:03:53.415892 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 14:03:53.415903 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 14:03:53.415927 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:53.415945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.415962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.415974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.415985 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:53.415997 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:53.416010 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-07-12 14:03:53.416031 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:53.416049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:53.416065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:53.416078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:53.416089 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 14:03:53.416100 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.416111 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 14:03:53.416130 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 14:03:53.416147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.416159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.416175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:53.416186 | orchestrator |
2025-07-12 14:03:53.416198 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2025-07-12 14:03:53.416209 | orchestrator | Saturday 12 July 2025 14:01:52 +0000 (0:00:04.674) 0:01:32.426 *********
2025-07-12 14:03:53.416220 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-07-12 14:03:53.416231 | orchestrator | skipping: [testbed-manager]
2025-07-12 14:03:53.416242 | orchestrator |
2025-07-12 14:03:53.416253 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-07-12 14:03:53.416263 | orchestrator | Saturday 12 July 2025 14:01:54 +0000 (0:00:01.589) 0:01:34.015 *********
2025-07-12 14:03:53.416274 | orchestrator |
2025-07-12 14:03:53.416285 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-07-12 14:03:53.416296 | orchestrator | Saturday 12 July 2025 14:01:54 +0000 (0:00:00.364) 0:01:34.380 *********
2025-07-12 14:03:53.416357 | orchestrator |
2025-07-12 14:03:53.416370 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-07-12 14:03:53.416382 | orchestrator | Saturday 12 July 2025 14:01:55 +0000 (0:00:00.159) 0:01:34.540 *********
2025-07-12 14:03:53.416393 | orchestrator |
2025-07-12 14:03:53.416403 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-07-12 14:03:53.416415 | orchestrator | Saturday 12 July 2025 14:01:55 +0000 (0:00:00.201) 0:01:34.742 *********
2025-07-12 14:03:53.416437 | orchestrator |
2025-07-12 14:03:53.416448 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-07-12 14:03:53.416458 | orchestrator | Saturday 12 July 2025 14:01:55 +0000 (0:00:00.214) 0:01:34.956 *********
2025-07-12 14:03:53.416467 | orchestrator |
2025-07-12 14:03:53.416479 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-07-12 14:03:53.416489 | orchestrator | Saturday 12 July 2025 14:01:55 +0000 (0:00:00.164) 0:01:35.120 *********
2025-07-12 14:03:53.416499 | orchestrator |
2025-07-12 14:03:53.416509 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-07-12 14:03:53.416519 | orchestrator | Saturday 12 July 2025 14:01:55 +0000 (0:00:00.111) 0:01:35.232 *********
2025-07-12 14:03:53.416528 | orchestrator |
2025-07-12 14:03:53.416539 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2025-07-12 14:03:53.416549 | orchestrator | Saturday 12 July 2025 14:01:55 +0000 (0:00:00.163) 0:01:35.396 *********
2025-07-12 14:03:53.416559 | orchestrator | changed: [testbed-manager]
2025-07-12 14:03:53.416567 | orchestrator |
2025-07-12 14:03:53.416576 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2025-07-12 14:03:53.416585 | orchestrator | Saturday 12 July 2025 14:02:31 +0000 (0:00:35.360) 0:02:10.756 *********
2025-07-12 14:03:53.416594 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:03:53.416602 | orchestrator | changed: [testbed-node-2]
2025-07-12 14:03:53.416611 | orchestrator | changed: [testbed-node-1]
2025-07-12 14:03:53.416619 | orchestrator | changed: [testbed-node-4]
2025-07-12 14:03:53.416628 | orchestrator | changed: [testbed-node-3]
2025-07-12 14:03:53.416637 | orchestrator | changed: [testbed-node-5]
2025-07-12 14:03:53.416646 | orchestrator | changed: [testbed-manager]
2025-07-12 14:03:53.416655 | orchestrator |
2025-07-12 14:03:53.416664 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2025-07-12 14:03:53.416674 | orchestrator | Saturday 12 July 2025 14:02:45 +0000 (0:00:14.572) 0:02:25.328 *********
2025-07-12 14:03:53.416680 | orchestrator | changed: [testbed-node-2]
2025-07-12 14:03:53.416685 | orchestrator | changed: [testbed-node-1]
2025-07-12 14:03:53.416690 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:03:53.416696 | orchestrator |
2025-07-12 14:03:53.416701 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2025-07-12 14:03:53.416707 | orchestrator | Saturday 12 July 2025 14:02:56 +0000 (0:00:10.191) 0:02:35.520 *********
2025-07-12 14:03:53.416712 | orchestrator | changed: [testbed-node-2]
2025-07-12 14:03:53.416717 | orchestrator | changed: [testbed-node-1]
2025-07-12 14:03:53.416723 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:03:53.416728 | orchestrator |
2025-07-12 14:03:53.416733 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2025-07-12 14:03:53.416739 | orchestrator | Saturday 12 July 2025 14:03:03 +0000 (0:00:07.090) 0:02:42.611 *********
2025-07-12 14:03:53.416744 | orchestrator | changed: [testbed-manager]
2025-07-12 14:03:53.416750 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:03:53.416755 | orchestrator | changed: [testbed-node-5]
2025-07-12 14:03:53.416767 | orchestrator | changed: [testbed-node-2]
2025-07-12 14:03:53.416773 | orchestrator | changed: [testbed-node-1]
2025-07-12 14:03:53.416779 | orchestrator | changed: [testbed-node-3]
2025-07-12 14:03:53.416784 | orchestrator | changed: [testbed-node-4]
2025-07-12 14:03:53.416789 | orchestrator |
2025-07-12 14:03:53.416795 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2025-07-12 14:03:53.416800 | orchestrator | Saturday 12 July 2025 14:03:19 +0000 (0:00:16.770) 0:02:59.382 *********
2025-07-12 14:03:53.416806 | orchestrator | changed: [testbed-manager]
2025-07-12 14:03:53.416811 | orchestrator |
2025-07-12 14:03:53.416816 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2025-07-12 14:03:53.416821 | orchestrator | Saturday 12 July 2025 14:03:27 +0000 (0:00:07.999) 0:03:07.381 *********
2025-07-12 14:03:53.416827 | orchestrator | changed: [testbed-node-1]
2025-07-12 14:03:53.416832 | orchestrator | changed: [testbed-node-2]
2025-07-12 14:03:53.416844 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:03:53.416849 | orchestrator |
2025-07-12 14:03:53.416855 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2025-07-12 14:03:53.416865 | orchestrator | Saturday 12 July 2025 14:03:34 +0000 (0:00:06.369) 0:03:13.751 *********
2025-07-12 14:03:53.416870 | orchestrator | changed: [testbed-manager]
2025-07-12 14:03:53.416876 | orchestrator |
2025-07-12 14:03:53.416881 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2025-07-12 14:03:53.416886 | orchestrator | Saturday 12 July 2025 14:03:39 +0000 (0:00:04.804) 0:03:18.555 *********
2025-07-12 14:03:53.416891 | orchestrator | changed: [testbed-node-4]
2025-07-12 14:03:53.416897 | orchestrator | changed: [testbed-node-3]
2025-07-12 14:03:53.416902 | orchestrator | changed: [testbed-node-5]
2025-07-12 14:03:53.416907 | orchestrator |
2025-07-12 14:03:53.416913 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 14:03:53.416922 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-07-12 14:03:53.416932 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-07-12 14:03:53.416941 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-07-12 14:03:53.416949 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-07-12 14:03:53.416959 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-07-12 14:03:53.416968 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-07-12 14:03:53.416976 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-07-12 14:03:53.416986 | orchestrator |
2025-07-12 14:03:53.416996 | orchestrator |
2025-07-12 14:03:53.417005 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 14:03:53.417013 | orchestrator | Saturday 12 July 2025 14:03:51 +0000 (0:00:12.106) 0:03:30.661 ********* 2025-07-12 14:03:53.417018 | orchestrator | =============================================================================== 2025-07-12 14:03:53.417024 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 35.36s 2025-07-12 14:03:53.417032 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 24.98s 2025-07-12 14:03:53.417041 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 17.01s 2025-07-12 14:03:53.417050 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 16.78s 2025-07-12 14:03:53.417059 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.57s 2025-07-12 14:03:53.417068 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 12.11s 2025-07-12 14:03:53.417078 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.19s 2025-07-12 14:03:53.417086 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 7.99s 2025-07-12 14:03:53.417094 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 7.09s 2025-07-12 14:03:53.417104 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.99s 2025-07-12 14:03:53.417113 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 6.37s 2025-07-12 14:03:53.417123 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.05s 2025-07-12 14:03:53.417132 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.80s 2025-07-12 
14:03:53.417148 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.67s 2025-07-12 14:03:53.417157 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.10s 2025-07-12 14:03:53.417166 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.72s 2025-07-12 14:03:53.417175 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.39s 2025-07-12 14:03:53.417185 | orchestrator | prometheus : include_tasks ---------------------------------------------- 2.22s 2025-07-12 14:03:53.417194 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 2.04s 2025-07-12 14:03:53.417209 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.02s 2025-07-12 14:03:53.417218 | orchestrator | 2025-07-12 14:03:53 | INFO  | Task 48d81262-b5d3-474e-8571-4fb564f89c84 is in state STARTED 2025-07-12 14:03:53.417227 | orchestrator | 2025-07-12 14:03:53 | INFO  | Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED 2025-07-12 14:03:53.417237 | orchestrator | 2025-07-12 14:03:53 | INFO  | Task 29082a66-c2ea-4bd3-b778-7dd233ba03ae is in state STARTED 2025-07-12 14:03:53.417246 | orchestrator | 2025-07-12 14:03:53 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:03:56.461668 | orchestrator | 2025-07-12 14:03:56 | INFO  | Task d8890ef0-a147-4c75-972c-46fb96a5e8a8 is in state STARTED 2025-07-12 14:03:56.465049 | orchestrator | 2025-07-12 14:03:56 | INFO  | Task 48d81262-b5d3-474e-8571-4fb564f89c84 is in state STARTED 2025-07-12 14:03:56.466845 | orchestrator | 2025-07-12 14:03:56 | INFO  | Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED 2025-07-12 14:03:56.469175 | orchestrator | 2025-07-12 14:03:56 | INFO  | Task 29082a66-c2ea-4bd3-b778-7dd233ba03ae is in state STARTED 2025-07-12 14:03:56.469433 | orchestrator | 2025-07-12 14:03:56 | 
INFO  | Wait 1 second(s) until the next check
2025-07-12 14:05:03.555557 | orchestrator | 2025-07-12 14:05:03 | INFO  | Task 
d8890ef0-a147-4c75-972c-46fb96a5e8a8 is in state STARTED
2025-07-12 14:05:03.558324 | orchestrator | 2025-07-12 14:05:03 | INFO  | Task 48d81262-b5d3-474e-8571-4fb564f89c84 is in state STARTED
2025-07-12 14:05:03.561827 | orchestrator | 2025-07-12 14:05:03 | INFO  | Task 37ed18e3-d297-4532-8568-7cf9eb8dad37 is in state STARTED
2025-07-12 14:05:03.564353 | orchestrator | 2025-07-12 14:05:03 | INFO  | Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED
2025-07-12 14:05:03.566693 | orchestrator | 2025-07-12 14:05:03 | INFO  | Task 29082a66-c2ea-4bd3-b778-7dd233ba03ae is in state SUCCESS
2025-07-12 14:05:03.566863 | orchestrator | 
2025-07-12 14:05:03.569745 | orchestrator | 
2025-07-12 14:05:03.569785 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 14:05:03.569798 | orchestrator | 
2025-07-12 14:05:03.569810 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 14:05:03.569821 | orchestrator | Saturday 12 July 2025 14:02:05 +0000 (0:00:00.337) 0:00:00.337 *********
2025-07-12 14:05:03.569832 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:05:03.569844 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:05:03.569855 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:05:03.569866 | orchestrator | 
2025-07-12 14:05:03.569877 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 14:05:03.569888 | orchestrator | Saturday 12 July 2025 14:02:06 +0000 (0:00:00.311) 0:00:00.649 *********
2025-07-12 14:05:03.569899 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2025-07-12 14:05:03.569910 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2025-07-12 14:05:03.569921 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2025-07-12 14:05:03.569932 | orchestrator | 
2025-07-12 14:05:03.569942 | orchestrator | PLAY [Apply role glance] 
*******************************************************
2025-07-12 14:05:03.569953 | orchestrator | 
2025-07-12 14:05:03.569964 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-07-12 14:05:03.569975 | orchestrator | Saturday 12 July 2025 14:02:06 +0000 (0:00:00.367) 0:00:01.016 *********
2025-07-12 14:05:03.569986 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 14:05:03.569998 | orchestrator | 
2025-07-12 14:05:03.570008 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2025-07-12 14:05:03.570067 | orchestrator | Saturday 12 July 2025 14:02:07 +0000 (0:00:00.497) 0:00:01.514 *********
2025-07-12 14:05:03.570082 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2025-07-12 14:05:03.570093 | orchestrator | 
2025-07-12 14:05:03.570104 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2025-07-12 14:05:03.570114 | orchestrator | Saturday 12 July 2025 14:02:10 +0000 (0:00:03.408) 0:00:04.922 *********
2025-07-12 14:05:03.570126 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2025-07-12 14:05:03.570137 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2025-07-12 14:05:03.570171 | orchestrator | 
2025-07-12 14:05:03.570183 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2025-07-12 14:05:03.570193 | orchestrator | Saturday 12 July 2025 14:02:16 +0000 (0:00:06.161) 0:00:11.084 *********
2025-07-12 14:05:03.570204 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-12 14:05:03.570215 | orchestrator | 
2025-07-12 14:05:03.570226 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2025-07-12 14:05:03.570237 | orchestrator | Saturday 12 July 2025 14:02:19 +0000 (0:00:03.169) 0:00:14.254 *********
2025-07-12 14:05:03.570248 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-12 14:05:03.570258 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2025-07-12 14:05:03.570270 | orchestrator | 
2025-07-12 14:05:03.570281 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2025-07-12 14:05:03.570335 | orchestrator | Saturday 12 July 2025 14:02:23 +0000 (0:00:03.813) 0:00:18.067 *********
2025-07-12 14:05:03.570348 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-12 14:05:03.570359 | orchestrator | 
2025-07-12 14:05:03.570372 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2025-07-12 14:05:03.570385 | orchestrator | Saturday 12 July 2025 14:02:26 +0000 (0:00:03.164) 0:00:21.232 *********
2025-07-12 14:05:03.570397 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2025-07-12 14:05:03.570410 | orchestrator | 
2025-07-12 14:05:03.570422 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2025-07-12 14:05:03.570434 | orchestrator | Saturday 12 July 2025 14:02:30 +0000 (0:00:03.886) 0:00:25.118 *********
2025-07-12 14:05:03.570483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 14:05:03.570504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 14:05:03.570534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-07-12 14:05:03.570549 | orchestrator | 
2025-07-12 14:05:03.570563 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-07-12 14:05:03.570575 | orchestrator | Saturday 12 July 2025 14:02:38 +0000 (0:00:07.985) 0:00:33.104 *********
2025-07-12 14:05:03.570596 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 14:05:03.570609 | orchestrator | 
2025-07-12 14:05:03.570622 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2025-07-12 14:05:03.570635 | orchestrator | Saturday 12 July 2025 14:02:39 +0000 (0:00:00.637) 0:00:33.741 *********
2025-07-12 14:05:03.570648 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:05:03.570659 | orchestrator | changed: [testbed-node-2]
2025-07-12 14:05:03.570669 | orchestrator | changed: [testbed-node-1]
2025-07-12 14:05:03.570680 | orchestrator | 
2025-07-12 14:05:03.570690 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2025-07-12 14:05:03.570701 | orchestrator | Saturday 12 July 2025 14:02:43 +0000 (0:00:04.150) 0:00:37.892 *********
2025-07-12 14:05:03.570712 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-07-12 14:05:03.570731 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-07-12 14:05:03.570742 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-07-12 14:05:03.570752 | orchestrator | 
2025-07-12 14:05:03.570763 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2025-07-12 14:05:03.570774 | orchestrator | Saturday 12 July 2025 14:02:45 +0000 (0:00:01.505) 0:00:39.397 *********
2025-07-12 14:05:03.570785 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-07-12 14:05:03.570796 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-07-12 14:05:03.570806 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-07-12 14:05:03.570817 | orchestrator | 
2025-07-12 14:05:03.570828 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2025-07-12 14:05:03.570838 | orchestrator | Saturday 12 July 2025 14:02:46 +0000 (0:00:01.068) 0:00:40.465 *********
2025-07-12 14:05:03.570849 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:05:03.570859 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:05:03.570870 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:05:03.570881 | orchestrator | 
2025-07-12 14:05:03.570891 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2025-07-12 14:05:03.570902 | orchestrator | Saturday 12 July 2025 14:02:47 +0000 (0:00:00.172) 0:00:41.443 *********
2025-07-12 14:05:03.570912 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:05:03.570923 | orchestrator | 
2025-07-12 14:05:03.570933 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2025-07-12 14:05:03.570944 | orchestrator | Saturday 12 July 2025 14:02:47 +0000 (0:00:00.372) 0:00:41.615 *********
2025-07-12 14:05:03.570955 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:05:03.570965 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:05:03.570976 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:05:03.570986 | orchestrator | 
2025-07-12 14:05:03.570997 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-07-12 14:05:03.571007 | orchestrator | Saturday 12 July 2025 14:02:47 +0000 (0:00:00.372) 0:00:41.988 *********
2025-07-12 14:05:03.571018 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 14:05:03.571030 | orchestrator | 
2025-07-12 14:05:03.571049 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2025-07-12 14:05:03.571067 | orchestrator | Saturday 12 July 2025 14:02:48 +0000 (0:00:00.556) 0:00:42.545 *********
2025-07-12 14:05:03.571104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 14:05:03.571138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 14:05:03.571166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 14:05:03.571184 | orchestrator | 2025-07-12 14:05:03.571204 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-07-12 14:05:03.571234 | orchestrator | Saturday 12 July 2025 14:02:52 +0000 (0:00:04.222) 0:00:46.768 ********* 2025-07-12 14:05:03.571266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 14:05:03.571312 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:05:03.571335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 14:05:03.571356 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:05:03.571394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 14:05:03.571416 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:05:03.571427 | orchestrator | 2025-07-12 14:05:03.571438 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-07-12 14:05:03.571449 | orchestrator | Saturday 12 July 2025 14:02:55 +0000 (0:00:03.059) 0:00:49.827 ********* 2025-07-12 14:05:03.571461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 14:05:03.571472 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:05:03.571495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 14:05:03.571515 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:05:03.571527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 14:05:03.571538 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:05:03.571549 | 
orchestrator | 2025-07-12 14:05:03.571560 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-07-12 14:05:03.571570 | orchestrator | Saturday 12 July 2025 14:03:00 +0000 (0:00:04.736) 0:00:54.564 ********* 2025-07-12 14:05:03.571581 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:05:03.571592 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:05:03.571603 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:05:03.571613 | orchestrator | 2025-07-12 14:05:03.571630 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-07-12 14:05:03.571648 | orchestrator | Saturday 12 July 2025 14:03:07 +0000 (0:00:07.414) 0:01:01.979 ********* 2025-07-12 14:05:03.571692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check 
inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 14:05:03.571725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 14:05:03.571754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5', '']}}}})
2025-07-12 14:05:03.571783 | orchestrator |
2025-07-12 14:05:03.571795 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2025-07-12 14:05:03.571805 | orchestrator | Saturday 12 July 2025 14:03:14 +0000 (0:00:06.808) 0:01:08.787 *********
2025-07-12 14:05:03.571815 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:05:03.571826 | orchestrator | changed: [testbed-node-1]
2025-07-12 14:05:03.571837 | orchestrator | changed: [testbed-node-2]
2025-07-12 14:05:03.571847 | orchestrator |
2025-07-12 14:05:03.571858 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2025-07-12 14:05:03.571877 | orchestrator | Saturday 12 July 2025 14:03:23 +0000 (0:00:08.659) 0:01:17.447 *********
2025-07-12 14:05:03.571889 | orchestrator | 2025-07-12 14:05:03 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:05:03.571900 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:05:03.571910 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:05:03.571920 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:05:03.571931 | orchestrator |
2025-07-12 14:05:03.571941 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2025-07-12 14:05:03.571952 | orchestrator | Saturday 12 July 2025 14:03:29 +0000 (0:00:06.763) 0:01:24.211 *********
2025-07-12 14:05:03.571962 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:05:03.571973 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:05:03.571983 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:05:03.571993 | orchestrator |
2025-07-12 14:05:03.572004 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2025-07-12 14:05:03.572014 | orchestrator | Saturday 12 July 2025 14:03:34 +0000 (0:00:04.546) 0:01:28.757 *********
2025-07-12 14:05:03.572025 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:05:03.572035 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:05:03.572046 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:05:03.572056 | orchestrator |
2025-07-12 14:05:03.572067 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2025-07-12 14:05:03.572078 | orchestrator | Saturday 12 July 2025 14:03:37 +0000 (0:00:03.381) 0:01:32.138 *********
2025-07-12 14:05:03.572088 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:05:03.572099 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:05:03.572109 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:05:03.572120 | orchestrator |
2025-07-12 14:05:03.572130 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2025-07-12 14:05:03.572141 | orchestrator | Saturday 12 July 2025 14:03:42 +0000 (0:00:05.172) 0:01:37.311 *********
2025-07-12 14:05:03.572151 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:05:03.572162 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:05:03.572172 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:05:03.572182 | orchestrator |
2025-07-12 14:05:03.572193 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2025-07-12 14:05:03.572203 | orchestrator | Saturday 12 July 2025 14:03:43 +0000 (0:00:00.330) 0:01:37.642 *********
2025-07-12 14:05:03.572214 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-07-12 14:05:03.572231 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:05:03.572242 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-07-12 14:05:03.572253 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:05:03.572263 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
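Every glance-api item dumped in the tasks above carries the same haproxy block, and its custom_member_list strings appear verbatim in the log. A minimal sketch of how such HAProxy `server` directives can be generated (this is illustrative only, not OSISM/kolla-ansible code; the helper name `render_members` is made up, while the node names, IPs, and check parameters are taken from the log):

```python
# Sketch: build the "server ..." lines seen in custom_member_list above.
# One directive per backend node, with HAProxy health-check tuning:
# "check inter 2000" = probe every 2000 ms, "rise 2" / "fall 5" = number of
# consecutive successes/failures before the member flips state.
def render_members(nodes, port=9292, inter=2000, rise=2, fall=5):
    """Return one HAProxy 'server' directive per (name, ip) backend node."""
    return [
        f"server {name} {ip}:{port} check inter {inter} rise {rise} fall {fall}"
        for name, ip in nodes
    ]

# Node list as it appears in the deployment log.
nodes = [
    ("testbed-node-0", "192.168.16.10"),
    ("testbed-node-1", "192.168.16.11"),
    ("testbed-node-2", "192.168.16.12"),
]
members = render_members(nodes)
```

The trailing empty string in the logged lists (`..., ''`) suggests the template joins on newlines and leaves a final blank entry; the sketch omits it.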
2025-07-12 14:05:03.572274 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:05:03.572284 | orchestrator | 2025-07-12 14:05:03.572385 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-07-12 14:05:03.572397 | orchestrator | Saturday 12 July 2025 14:03:46 +0000 (0:00:03.371) 0:01:41.013 ********* 2025-07-12 14:05:03.572414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 14:05:03.572438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 14:05:03.572458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 14:05:03.572470 | orchestrator | 2025-07-12 14:05:03.572481 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-07-12 14:05:03.572496 | orchestrator | Saturday 12 July 2025 14:03:50 +0000 (0:00:03.437) 0:01:44.450 ********* 2025-07-12 14:05:03.572507 | orchestrator | skipping: 
[testbed-node-0]
2025-07-12 14:05:03.572517 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:05:03.572528 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:05:03.572539 | orchestrator |
2025-07-12 14:05:03.572549 | orchestrator | TASK [glance : Creating Glance database] ***************************************
2025-07-12 14:05:03.572560 | orchestrator | Saturday 12 July 2025 14:03:50 +0000 (0:00:00.288) 0:01:44.739 *********
2025-07-12 14:05:03.572570 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:05:03.572581 | orchestrator |
2025-07-12 14:05:03.572591 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2025-07-12 14:05:03.572602 | orchestrator | Saturday 12 July 2025 14:03:52 +0000 (0:00:02.049) 0:01:46.788 *********
2025-07-12 14:05:03.572612 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:05:03.572623 | orchestrator |
2025-07-12 14:05:03.572634 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2025-07-12 14:05:03.572645 | orchestrator | Saturday 12 July 2025 14:03:54 +0000 (0:00:02.426) 0:01:49.215 *********
2025-07-12 14:05:03.572655 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:05:03.572666 | orchestrator |
2025-07-12 14:05:03.572683 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2025-07-12 14:05:03.572694 | orchestrator | Saturday 12 July 2025 14:03:56 +0000 (0:00:02.073) 0:01:51.288 *********
2025-07-12 14:05:03.572705 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:05:03.572715 | orchestrator |
2025-07-12 14:05:03.572726 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2025-07-12 14:05:03.572737 | orchestrator | Saturday 12 July 2025 14:04:22 +0000 (0:00:25.781) 0:02:17.070 *********
2025-07-12 14:05:03.572747 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:05:03.572758 | orchestrator |
2025-07-12 14:05:03.572768 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-07-12 14:05:03.572785 | orchestrator | Saturday 12 July 2025 14:04:25 +0000 (0:00:02.413) 0:02:19.483 *********
2025-07-12 14:05:03.572796 | orchestrator |
2025-07-12 14:05:03.572807 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-07-12 14:05:03.572818 | orchestrator | Saturday 12 July 2025 14:04:25 +0000 (0:00:00.063) 0:02:19.547 *********
2025-07-12 14:05:03.572828 | orchestrator |
2025-07-12 14:05:03.572839 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-07-12 14:05:03.572849 | orchestrator | Saturday 12 July 2025 14:04:25 +0000 (0:00:00.077) 0:02:19.624 *********
2025-07-12 14:05:03.572860 | orchestrator |
2025-07-12 14:05:03.572871 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2025-07-12 14:05:03.572881 | orchestrator | Saturday 12 July 2025 14:04:25 +0000 (0:00:00.069) 0:02:19.694 *********
2025-07-12 14:05:03.572892 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:05:03.572902 | orchestrator | changed: [testbed-node-2]
2025-07-12 14:05:03.572913 | orchestrator | changed: [testbed-node-1]
2025-07-12 14:05:03.572923 | orchestrator |
2025-07-12 14:05:03.572934 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 14:05:03.572945 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-07-12 14:05:03.572957 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-07-12 14:05:03.572967 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-07-12 14:05:03.572978 | orchestrator |
2025-07-12 14:05:03.572989 | orchestrator |
2025-07-12 14:05:03.572999 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 14:05:03.573010 | orchestrator | Saturday 12 July 2025 14:05:01 +0000 (0:00:36.637) 0:02:56.331 *********
2025-07-12 14:05:03.573020 | orchestrator | ===============================================================================
2025-07-12 14:05:03.573031 | orchestrator | glance : Restart glance-api container ---------------------------------- 36.64s
2025-07-12 14:05:03.573042 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 25.78s
2025-07-12 14:05:03.573052 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 8.66s
2025-07-12 14:05:03.573062 | orchestrator | glance : Ensuring config directories exist ------------------------------ 7.99s
2025-07-12 14:05:03.573073 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 7.41s
2025-07-12 14:05:03.573084 | orchestrator | glance : Copying over config.json files for services -------------------- 6.81s
2025-07-12 14:05:03.573094 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 6.76s
2025-07-12 14:05:03.573105 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.16s
2025-07-12 14:05:03.573115 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 5.17s
2025-07-12 14:05:03.573126 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 4.74s
2025-07-12 14:05:03.573136 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.55s
2025-07-12 14:05:03.573147 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.22s
2025-07-12 14:05:03.573158 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.15s
2025-07-12 14:05:03.573168 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.89s
2025-07-12 14:05:03.573179 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.81s
2025-07-12 14:05:03.573189 | orchestrator | glance : Check glance containers ---------------------------------------- 3.44s
2025-07-12 14:05:03.573200 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.41s
2025-07-12 14:05:03.573215 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.38s
2025-07-12 14:05:03.573232 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.37s
2025-07-12 14:05:03.573243 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.17s
2025-07-12 14:05:06.624315 | orchestrator | 2025-07-12 14:05:06 | INFO  | Task d8890ef0-a147-4c75-972c-46fb96a5e8a8 is in state STARTED
2025-07-12 14:05:06.624494 | orchestrator | 2025-07-12 14:05:06 | INFO  | Task 48d81262-b5d3-474e-8571-4fb564f89c84 is in state STARTED
2025-07-12 14:05:06.625342 | orchestrator | 2025-07-12 14:05:06 | INFO  | Task 37ed18e3-d297-4532-8568-7cf9eb8dad37 is in state STARTED
2025-07-12 14:05:06.626073 | orchestrator | 2025-07-12 14:05:06 | INFO  | Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED
2025-07-12 14:05:06.626411 | orchestrator | 2025-07-12 14:05:06 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:05:09.673516 | orchestrator | 2025-07-12 14:05:09 | INFO  | Task d8890ef0-a147-4c75-972c-46fb96a5e8a8 is in state STARTED
2025-07-12 14:05:09.675801 | orchestrator | 2025-07-12 14:05:09 | INFO  | Task 48d81262-b5d3-474e-8571-4fb564f89c84 is in state STARTED
2025-07-12 14:05:09.675836 | orchestrator | 2025-07-12 14:05:09 | INFO  | Task 37ed18e3-d297-4532-8568-7cf9eb8dad37 is in state STARTED
2025-07-12 14:05:09.675848 | orchestrator | 2025-07-12 14:05:09 | INFO  | Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED
2025-07-12 14:05:09.675860 | orchestrator | 2025-07-12 14:05:09 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:05:12.723029 | orchestrator | 2025-07-12 14:05:12 | INFO  | Task d8890ef0-a147-4c75-972c-46fb96a5e8a8 is in state STARTED
2025-07-12 14:05:12.724687 | orchestrator | 2025-07-12 14:05:12 | INFO  | Task 48d81262-b5d3-474e-8571-4fb564f89c84 is in state STARTED
2025-07-12 14:05:12.726461 | orchestrator | 2025-07-12 14:05:12 | INFO  | Task 37ed18e3-d297-4532-8568-7cf9eb8dad37 is in state STARTED
2025-07-12 14:05:12.727927 | orchestrator | 2025-07-12 14:05:12 | INFO  | Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED
2025-07-12 14:05:12.728150 | orchestrator | 2025-07-12 14:05:12 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:05:15.776803 | orchestrator | 2025-07-12 14:05:15 | INFO  | Task d8890ef0-a147-4c75-972c-46fb96a5e8a8 is in state STARTED
2025-07-12 14:05:15.778751 | orchestrator | 2025-07-12 14:05:15 | INFO  | Task 48d81262-b5d3-474e-8571-4fb564f89c84 is in state STARTED
2025-07-12 14:05:15.780861 | orchestrator | 2025-07-12 14:05:15 | INFO  | Task 37ed18e3-d297-4532-8568-7cf9eb8dad37 is in state STARTED
2025-07-12 14:05:15.782558 | orchestrator | 2025-07-12 14:05:15 | INFO  | Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED
2025-07-12 14:05:15.782586 | orchestrator | 2025-07-12 14:05:15 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:05:18.825646 | orchestrator | 2025-07-12 14:05:18 | INFO  | Task d8890ef0-a147-4c75-972c-46fb96a5e8a8 is in state STARTED
2025-07-12 14:05:18.829103 | orchestrator | 2025-07-12 14:05:18 | INFO  | Task 48d81262-b5d3-474e-8571-4fb564f89c84 is in state SUCCESS
2025-07-12 14:05:18.831272 | orchestrator |
2025-07-12 14:05:18.831341 | orchestrator |
2025-07-12 14:05:18.831416 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 14:05:18.831429 |
orchestrator | 2025-07-12 14:05:18.831440 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 14:05:18.831452 | orchestrator | Saturday 12 July 2025 14:02:10 +0000 (0:00:00.271) 0:00:00.271 ********* 2025-07-12 14:05:18.831463 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:05:18.831474 | orchestrator | ok: [testbed-node-1] 2025-07-12 14:05:18.831485 | orchestrator | ok: [testbed-node-2] 2025-07-12 14:05:18.831496 | orchestrator | ok: [testbed-node-3] 2025-07-12 14:05:18.831584 | orchestrator | ok: [testbed-node-4] 2025-07-12 14:05:18.831598 | orchestrator | ok: [testbed-node-5] 2025-07-12 14:05:18.831609 | orchestrator | 2025-07-12 14:05:18.832268 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 14:05:18.832282 | orchestrator | Saturday 12 July 2025 14:02:11 +0000 (0:00:00.751) 0:00:01.023 ********* 2025-07-12 14:05:18.832326 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-07-12 14:05:18.832338 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-07-12 14:05:18.832349 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-07-12 14:05:18.832360 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-07-12 14:05:18.832371 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-07-12 14:05:18.832382 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-07-12 14:05:18.832393 | orchestrator | 2025-07-12 14:05:18.832404 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-07-12 14:05:18.832414 | orchestrator | 2025-07-12 14:05:18.832425 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-07-12 14:05:18.832436 | orchestrator | Saturday 12 July 2025 14:02:12 +0000 (0:00:00.668) 0:00:01.692 ********* 2025-07-12 14:05:18.832464 | orchestrator | 
included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 14:05:18.832476 | orchestrator | 2025-07-12 14:05:18.832488 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-07-12 14:05:18.832498 | orchestrator | Saturday 12 July 2025 14:02:13 +0000 (0:00:01.257) 0:00:02.950 ********* 2025-07-12 14:05:18.832510 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-07-12 14:05:18.832520 | orchestrator | 2025-07-12 14:05:18.832531 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-07-12 14:05:18.832542 | orchestrator | Saturday 12 July 2025 14:02:16 +0000 (0:00:03.338) 0:00:06.288 ********* 2025-07-12 14:05:18.832553 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-07-12 14:05:18.832564 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-07-12 14:05:18.832574 | orchestrator | 2025-07-12 14:05:18.832585 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-07-12 14:05:18.832596 | orchestrator | Saturday 12 July 2025 14:02:22 +0000 (0:00:06.119) 0:00:12.408 ********* 2025-07-12 14:05:18.832607 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-12 14:05:18.832618 | orchestrator | 2025-07-12 14:05:18.832629 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-07-12 14:05:18.832640 | orchestrator | Saturday 12 July 2025 14:02:25 +0000 (0:00:03.010) 0:00:15.419 ********* 2025-07-12 14:05:18.832650 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-12 14:05:18.832661 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-07-12 
14:05:18.832672 | orchestrator | 2025-07-12 14:05:18.832683 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-07-12 14:05:18.832695 | orchestrator | Saturday 12 July 2025 14:02:29 +0000 (0:00:03.669) 0:00:19.088 ********* 2025-07-12 14:05:18.832706 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-12 14:05:18.832717 | orchestrator | 2025-07-12 14:05:18.832728 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-07-12 14:05:18.832738 | orchestrator | Saturday 12 July 2025 14:02:33 +0000 (0:00:04.070) 0:00:23.159 ********* 2025-07-12 14:05:18.832749 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-07-12 14:05:18.832760 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-07-12 14:05:18.832771 | orchestrator | 2025-07-12 14:05:18.832781 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-07-12 14:05:18.832804 | orchestrator | Saturday 12 July 2025 14:02:41 +0000 (0:00:08.012) 0:00:31.172 ********* 2025-07-12 14:05:18.832868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 14:05:18.832888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 14:05:18.832907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 14:05:18.832921 | orchestrator 
| changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:18.832935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:18.832958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:18.833006 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:18.833027 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:18.833041 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:18.833054 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:18.833075 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:18.833120 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:18.833134 | orchestrator | 2025-07-12 14:05:18.833147 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-07-12 14:05:18.833160 | orchestrator | Saturday 12 July 2025 14:02:44 +0000 (0:00:02.561) 0:00:33.733 ********* 2025-07-12 14:05:18.833172 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:05:18.833184 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:05:18.833196 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:05:18.833208 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:05:18.833219 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:05:18.833230 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:05:18.833240 | orchestrator | 2025-07-12 14:05:18.833251 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-07-12 14:05:18.833262 | orchestrator | Saturday 12 July 2025 14:02:44 +0000 (0:00:00.515) 0:00:34.249 ********* 2025-07-12 14:05:18.833273 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:05:18.833319 | orchestrator | skipping: [testbed-node-1] 2025-07-12 
14:05:18.833332 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:05:18.833342 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 14:05:18.833353 | orchestrator | 2025-07-12 14:05:18.833364 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-07-12 14:05:18.833374 | orchestrator | Saturday 12 July 2025 14:02:45 +0000 (0:00:00.944) 0:00:35.193 ********* 2025-07-12 14:05:18.833385 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-07-12 14:05:18.833396 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-07-12 14:05:18.833406 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-07-12 14:05:18.833423 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-07-12 14:05:18.833434 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-07-12 14:05:18.833445 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-07-12 14:05:18.833455 | orchestrator | 2025-07-12 14:05:18.833466 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-07-12 14:05:18.833477 | orchestrator | Saturday 12 July 2025 14:02:47 +0000 (0:00:01.942) 0:00:37.136 ********* 2025-07-12 14:05:18.833489 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-12 14:05:18.833513 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-12 14:05:18.833557 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 
'cluster': 'ceph', 'enabled': True}])  2025-07-12 14:05:18.833570 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-12 14:05:18.833587 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-12 14:05:18.833599 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-12 14:05:18.833617 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-12 14:05:18.833659 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-12 14:05:18.833682 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-12 14:05:18.833699 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-07-12 14:05:18.833720 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-07-12 14:05:18.833731 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-07-12 14:05:18.833742 | orchestrator |
2025-07-12 14:05:18.833753 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2025-07-12 14:05:18.833764 | orchestrator | Saturday 12 July 2025 14:02:51 +0000 (0:00:03.341) 0:00:40.477 *********
2025-07-12 14:05:18.833775 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-07-12 14:05:18.833786 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-07-12 14:05:18.833797 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-07-12 14:05:18.833808 | orchestrator |
2025-07-12 14:05:18.833818 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2025-07-12 14:05:18.833829 | orchestrator | Saturday 12 July 2025 14:02:52 +0000 (0:00:01.597) 0:00:42.075 *********
2025-07-12 14:05:18.833870 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring)
2025-07-12 14:05:18.833883 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring)
2025-07-12 14:05:18.833893 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring)
2025-07-12 14:05:18.833904 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring)
2025-07-12 14:05:18.833914 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring)
2025-07-12 14:05:18.833925 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring)
2025-07-12 14:05:18.833935 | orchestrator |
2025-07-12 14:05:18.833946 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2025-07-12 14:05:18.833957 | orchestrator | Saturday 12 July 2025 14:02:55 +0000 (0:00:03.295) 0:00:45.370 *********
2025-07-12 14:05:18.833967 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume)
2025-07-12 14:05:18.833978 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume)
2025-07-12 14:05:18.833988 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume)
2025-07-12 14:05:18.833999 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup)
2025-07-12 14:05:18.834065 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup)
2025-07-12 14:05:18.834081 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup)
2025-07-12 14:05:18.834100 | orchestrator |
2025-07-12 14:05:18.834111 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2025-07-12 14:05:18.834122 | orchestrator | Saturday 12 July 2025 14:02:57 +0000 (0:00:01.248) 0:00:46.619 *********
2025-07-12 14:05:18.834132 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:05:18.834143 | orchestrator |
2025-07-12 14:05:18.834153 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2025-07-12 14:05:18.834164 | orchestrator | Saturday 12 July 2025 14:02:57 +0000 (0:00:00.185) 0:00:46.804 *********
2025-07-12 14:05:18.834186 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:05:18.834197 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:05:18.834208 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:05:18.834218 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:05:18.834228 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:05:18.834239 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:05:18.834249 | orchestrator |
2025-07-12 14:05:18.834260 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-07-12 14:05:18.834271 | orchestrator | Saturday 12 July 2025 14:02:58 +0000 (0:00:01.394) 0:00:48.199 *********
2025-07-12 14:05:18.834282 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 14:05:18.834351 | orchestrator |
2025-07-12 14:05:18.834362 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2025-07-12 14:05:18.834373 | orchestrator | Saturday 12 July 2025 14:03:00 +0000 (0:00:01.460) 0:00:49.660 *********
2025-07-12 14:05:18.834385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 14:05:18.834397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 14:05:18.834450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 14:05:18.834477 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 14:05:18.834489 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 14:05:18.834501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 14:05:18.834512 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 14:05:18.834555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 14:05:18.834575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 14:05:18.834592 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 14:05:18.834603 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 14:05:18.834614 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 14:05:18.834626 | orchestrator |
2025-07-12 14:05:18.834637 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] ***
2025-07-12 14:05:18.834647 | orchestrator | Saturday 12 July 2025 14:03:03 +0000 (0:00:03.703) 0:00:53.364 *********
2025-07-12 14:05:18.834664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 14:05:18.834682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 14:05:18.834697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 14:05:18.834709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 14:05:18.834720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 14:05:18.834731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 14:05:18.834742 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:05:18.834753 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:05:18.834770 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:05:18.834790 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 14:05:18.834802 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 14:05:18.834818 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:05:18.834830 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 14:05:18.834841 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 14:05:18.834852 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:05:18.834863 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 14:05:18.834889 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 14:05:18.834901 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:05:18.834912 | orchestrator |
2025-07-12 14:05:18.834922 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2025-07-12 14:05:18.834933 | orchestrator | Saturday 12 July 2025 14:03:07 +0000 (0:00:03.398) 0:00:56.763 *********
2025-07-12 14:05:18.834949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 14:05:18.834961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 14:05:18.834972 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:05:18.834983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 14:05:18.834994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 14:05:18.835011 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:05:18.835029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 14:05:18.835041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 14:05:18.835052 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:05:18.835068 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 14:05:18.835079 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 14:05:18.835090 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:05:18.835101 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 14:05:18.835125 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 14:05:18.835136 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:05:18.835148 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 14:05:18.835164 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 14:05:18.835175 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:05:18.835186 | orchestrator |
2025-07-12 14:05:18.835197 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
2025-07-12 14:05:18.835207 | orchestrator | Saturday 12 July 2025 14:03:10 +0000 (0:00:03.055) 0:00:59.819 *********
2025-07-12 14:05:18.835218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 14:05:18.835239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 14:05:18.835257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 14:05:18.835273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 14:05:18.835307 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 14:05:18.835320 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared',
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:18.835338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:18.835356 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:18.835368 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:18.835384 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:18.835396 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:18.835413 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:18.835424 | orchestrator | 2025-07-12 14:05:18.835435 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-07-12 14:05:18.835446 | orchestrator | Saturday 12 July 2025 14:03:14 +0000 (0:00:04.076) 0:01:03.895 ********* 2025-07-12 14:05:18.835457 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-07-12 14:05:18.835467 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:05:18.835478 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-07-12 14:05:18.835489 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:05:18.835500 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-07-12 14:05:18.835511 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-07-12 14:05:18.835521 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-07-12 14:05:18.835532 | orchestrator | skipping: [testbed-node-5] 
2025-07-12 14:05:18.835548 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-07-12 14:05:18.835559 | orchestrator | 2025-07-12 14:05:18.835570 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-07-12 14:05:18.835581 | orchestrator | Saturday 12 July 2025 14:03:16 +0000 (0:00:02.401) 0:01:06.297 ********* 2025-07-12 14:05:18.835592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 14:05:18.835608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 14:05:18.835625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 14:05:18.835637 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:18.835655 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:18.835671 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:18.835682 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:18.835699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:18.835710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:18.835722 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:18.835738 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:18.835750 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:18.835761 | orchestrator | 2025-07-12 14:05:18.835776 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-07-12 14:05:18.835788 | orchestrator | Saturday 12 July 2025 14:03:27 +0000 (0:00:10.460) 0:01:16.757 ********* 2025-07-12 14:05:18.835798 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:05:18.835809 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:05:18.835826 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:05:18.835837 | orchestrator | changed: [testbed-node-3] 2025-07-12 14:05:18.835848 | orchestrator | changed: [testbed-node-4] 2025-07-12 14:05:18.835858 | orchestrator | changed: [testbed-node-5] 2025-07-12 14:05:18.835869 | orchestrator | 2025-07-12 14:05:18.835880 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-07-12 14:05:18.835890 | orchestrator | Saturday 12 July 2025 14:03:30 +0000 (0:00:03.287) 0:01:20.045 ********* 2025-07-12 14:05:18.835901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 14:05:18.835913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 14:05:18.835924 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:05:18.835940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 14:05:18.835951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 14:05:18.835963 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:05:18.835979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 14:05:18.835996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  
2025-07-12 14:05:18.836007 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:05:18.836018 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 14:05:18.836030 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 14:05:18.836041 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:05:18.836057 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 14:05:18.836074 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 14:05:18.836091 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:05:18.836103 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 14:05:18.836114 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 14:05:18.836125 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:05:18.836135 | orchestrator | 2025-07-12 14:05:18.836146 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-07-12 14:05:18.836157 | orchestrator | Saturday 12 July 2025 14:03:32 +0000 (0:00:01.489) 0:01:21.534 ********* 2025-07-12 14:05:18.836167 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:05:18.836178 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:05:18.836188 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:05:18.836199 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:05:18.836209 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:05:18.836220 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:05:18.836230 | 
orchestrator | 2025-07-12 14:05:18.836241 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-07-12 14:05:18.836252 | orchestrator | Saturday 12 July 2025 14:03:32 +0000 (0:00:00.882) 0:01:22.417 ********* 2025-07-12 14:05:18.836269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 14:05:18.836383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 14:05:18.836396 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:18.836408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 14:05:18.836419 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:18.836437 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:18.836463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:18.836474 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:18.836486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:18.836497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:18.836508 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:18.836526 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 
14:05:18.836546 | orchestrator | 2025-07-12 14:05:18.836557 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-07-12 14:05:18.836568 | orchestrator | Saturday 12 July 2025 14:03:35 +0000 (0:00:02.643) 0:01:25.062 ********* 2025-07-12 14:05:18.836579 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:05:18.836590 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:05:18.836600 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:05:18.836611 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:05:18.836621 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:05:18.836632 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:05:18.836642 | orchestrator | 2025-07-12 14:05:18.836653 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-07-12 14:05:18.836664 | orchestrator | Saturday 12 July 2025 14:03:36 +0000 (0:00:01.110) 0:01:26.172 ********* 2025-07-12 14:05:18.836674 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:05:18.836685 | orchestrator | 2025-07-12 14:05:18.836700 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-07-12 14:05:18.836711 | orchestrator | Saturday 12 July 2025 14:03:38 +0000 (0:00:02.017) 0:01:28.190 ********* 2025-07-12 14:05:18.836722 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:05:18.836732 | orchestrator | 2025-07-12 14:05:18.836743 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-07-12 14:05:18.836754 | orchestrator | Saturday 12 July 2025 14:03:40 +0000 (0:00:02.207) 0:01:30.397 ********* 2025-07-12 14:05:18.836764 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:05:18.836774 | orchestrator | 2025-07-12 14:05:18.836785 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-07-12 14:05:18.836796 | orchestrator | Saturday 12 July 
2025 14:04:04 +0000 (0:00:23.315) 0:01:53.713 ********* 2025-07-12 14:05:18.836806 | orchestrator | 2025-07-12 14:05:18.836817 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-07-12 14:05:18.836828 | orchestrator | Saturday 12 July 2025 14:04:04 +0000 (0:00:00.080) 0:01:53.793 ********* 2025-07-12 14:05:18.836838 | orchestrator | 2025-07-12 14:05:18.836849 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-07-12 14:05:18.836859 | orchestrator | Saturday 12 July 2025 14:04:04 +0000 (0:00:00.065) 0:01:53.859 ********* 2025-07-12 14:05:18.836870 | orchestrator | 2025-07-12 14:05:18.836880 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-07-12 14:05:18.836891 | orchestrator | Saturday 12 July 2025 14:04:04 +0000 (0:00:00.065) 0:01:53.924 ********* 2025-07-12 14:05:18.836901 | orchestrator | 2025-07-12 14:05:18.836912 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-07-12 14:05:18.836922 | orchestrator | Saturday 12 July 2025 14:04:04 +0000 (0:00:00.063) 0:01:53.987 ********* 2025-07-12 14:05:18.836933 | orchestrator | 2025-07-12 14:05:18.836944 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-07-12 14:05:18.836954 | orchestrator | Saturday 12 July 2025 14:04:04 +0000 (0:00:00.065) 0:01:54.053 ********* 2025-07-12 14:05:18.836965 | orchestrator | 2025-07-12 14:05:18.836975 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-07-12 14:05:18.836986 | orchestrator | Saturday 12 July 2025 14:04:04 +0000 (0:00:00.063) 0:01:54.116 ********* 2025-07-12 14:05:18.836996 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:05:18.837007 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:05:18.837024 | orchestrator | changed: [testbed-node-2] 2025-07-12 
14:05:18.837035 | orchestrator | 2025-07-12 14:05:18.837046 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-07-12 14:05:18.837056 | orchestrator | Saturday 12 July 2025 14:04:27 +0000 (0:00:22.814) 0:02:16.930 ********* 2025-07-12 14:05:18.837067 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:05:18.837077 | orchestrator | changed: [testbed-node-2] 2025-07-12 14:05:18.837088 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:05:18.837098 | orchestrator | 2025-07-12 14:05:18.837109 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-07-12 14:05:18.837120 | orchestrator | Saturday 12 July 2025 14:04:38 +0000 (0:00:10.819) 0:02:27.749 ********* 2025-07-12 14:05:18.837130 | orchestrator | changed: [testbed-node-3] 2025-07-12 14:05:18.837141 | orchestrator | changed: [testbed-node-4] 2025-07-12 14:05:18.837152 | orchestrator | changed: [testbed-node-5] 2025-07-12 14:05:18.837162 | orchestrator | 2025-07-12 14:05:18.837173 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-07-12 14:05:18.837183 | orchestrator | Saturday 12 July 2025 14:05:11 +0000 (0:00:33.518) 0:03:01.268 ********* 2025-07-12 14:05:18.837194 | orchestrator | changed: [testbed-node-4] 2025-07-12 14:05:18.837204 | orchestrator | changed: [testbed-node-3] 2025-07-12 14:05:18.837215 | orchestrator | changed: [testbed-node-5] 2025-07-12 14:05:18.837225 | orchestrator | 2025-07-12 14:05:18.837236 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-07-12 14:05:18.837247 | orchestrator | Saturday 12 July 2025 14:05:17 +0000 (0:00:05.791) 0:03:07.059 ********* 2025-07-12 14:05:18.837257 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:05:18.837268 | orchestrator | 2025-07-12 14:05:18.837278 | orchestrator | PLAY RECAP 
********************************************************************* 2025-07-12 14:05:18.837349 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-07-12 14:05:18.837362 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-07-12 14:05:18.837373 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-07-12 14:05:18.837384 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-07-12 14:05:18.837395 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-07-12 14:05:18.837405 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-07-12 14:05:18.837416 | orchestrator | 2025-07-12 14:05:18.837427 | orchestrator | 2025-07-12 14:05:18.837437 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 14:05:18.837447 | orchestrator | Saturday 12 July 2025 14:05:18 +0000 (0:00:00.630) 0:03:07.690 ********* 2025-07-12 14:05:18.837456 | orchestrator | =============================================================================== 2025-07-12 14:05:18.837465 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 33.52s 2025-07-12 14:05:18.837475 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 23.32s 2025-07-12 14:05:18.837489 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 22.81s 2025-07-12 14:05:18.837499 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.82s 2025-07-12 14:05:18.837509 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.46s 2025-07-12 14:05:18.837518 | orchestrator | 
service-ks-register : cinder | Granting user roles ---------------------- 8.01s 2025-07-12 14:05:18.837527 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.12s 2025-07-12 14:05:18.837547 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 5.79s 2025-07-12 14:05:18.837556 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.08s 2025-07-12 14:05:18.837566 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 4.07s 2025-07-12 14:05:18.837575 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.70s 2025-07-12 14:05:18.837584 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.67s 2025-07-12 14:05:18.837594 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS certificate --- 3.40s 2025-07-12 14:05:18.837603 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.34s 2025-07-12 14:05:18.837613 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.34s 2025-07-12 14:05:18.837622 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.30s 2025-07-12 14:05:18.837631 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 3.29s 2025-07-12 14:05:18.837641 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS key ------ 3.06s 2025-07-12 14:05:18.837650 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.01s 2025-07-12 14:05:18.837659 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.64s 2025-07-12 14:05:18.837669 | orchestrator | 2025-07-12 14:05:18 | INFO  | Task 37ed18e3-d297-4532-8568-7cf9eb8dad37 is in state STARTED 2025-07-12 14:05:18.837678 | orchestrator 
| 2025-07-12 14:05:18 | INFO  | Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED 2025-07-12 14:05:18.837688 | orchestrator | 2025-07-12 14:05:18 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:05:21.887642 | orchestrator | 2025-07-12 14:05:21 | INFO  | Task d8890ef0-a147-4c75-972c-46fb96a5e8a8 is in state STARTED 2025-07-12 14:05:21.889445 | orchestrator | 2025-07-12 14:05:21 | INFO  | Task be75192a-6f7d-4ef1-9024-2d7d3bb7ccdb is in state STARTED 2025-07-12 14:05:21.891520 | orchestrator | 2025-07-12 14:05:21 | INFO  | Task 37ed18e3-d297-4532-8568-7cf9eb8dad37 is in state STARTED 2025-07-12 14:05:21.893049 | orchestrator | 2025-07-12 14:05:21 | INFO  | Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED 2025-07-12 14:05:21.893070 | orchestrator | 2025-07-12 14:05:21 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:05:24.950675 | orchestrator | 2025-07-12 14:05:24 | INFO  | Task d8890ef0-a147-4c75-972c-46fb96a5e8a8 is in state STARTED 2025-07-12 14:05:24.951318 | orchestrator | 2025-07-12 14:05:24 | INFO  | Task be75192a-6f7d-4ef1-9024-2d7d3bb7ccdb is in state STARTED 2025-07-12 14:05:24.955354 | orchestrator | 2025-07-12 14:05:24 | INFO  | Task 37ed18e3-d297-4532-8568-7cf9eb8dad37 is in state STARTED 2025-07-12 14:05:24.956831 | orchestrator | 2025-07-12 14:05:24 | INFO  | Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED 2025-07-12 14:05:24.956853 | orchestrator | 2025-07-12 14:05:24 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:05:28.000960 | orchestrator | 2025-07-12 14:05:27 | INFO  | Task d8890ef0-a147-4c75-972c-46fb96a5e8a8 is in state STARTED 2025-07-12 14:05:28.001201 | orchestrator | 2025-07-12 14:05:27 | INFO  | Task be75192a-6f7d-4ef1-9024-2d7d3bb7ccdb is in state STARTED 2025-07-12 14:05:28.005603 | orchestrator | 2025-07-12 14:05:28 | INFO  | Task 37ed18e3-d297-4532-8568-7cf9eb8dad37 is in state STARTED 2025-07-12 14:05:28.005634 | orchestrator | 2025-07-12 14:05:28 | INFO  | 
Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED 2025-07-12 14:05:28.005646 | orchestrator | 2025-07-12 14:05:28 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:05:31.049432 | orchestrator | 2025-07-12 14:05:31 | INFO  | Task d8890ef0-a147-4c75-972c-46fb96a5e8a8 is in state STARTED 2025-07-12 14:05:31.054511 | orchestrator | 2025-07-12 14:05:31 | INFO  | Task be75192a-6f7d-4ef1-9024-2d7d3bb7ccdb is in state STARTED 2025-07-12 14:05:31.055271 | orchestrator | 2025-07-12 14:05:31 | INFO  | Task 37ed18e3-d297-4532-8568-7cf9eb8dad37 is in state STARTED 2025-07-12 14:05:31.056223 | orchestrator | 2025-07-12 14:05:31 | INFO  | Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED 2025-07-12 14:05:31.056400 | orchestrator | 2025-07-12 14:05:31 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:05:34.098148 | orchestrator | 2025-07-12 14:05:34 | INFO  | Task d8890ef0-a147-4c75-972c-46fb96a5e8a8 is in state STARTED 2025-07-12 14:05:34.098254 | orchestrator | 2025-07-12 14:05:34 | INFO  | Task be75192a-6f7d-4ef1-9024-2d7d3bb7ccdb is in state STARTED 2025-07-12 14:05:34.099536 | orchestrator | 2025-07-12 14:05:34 | INFO  | Task 37ed18e3-d297-4532-8568-7cf9eb8dad37 is in state STARTED 2025-07-12 14:05:34.103775 | orchestrator | 2025-07-12 14:05:34 | INFO  | Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED 2025-07-12 14:05:34.103813 | orchestrator | 2025-07-12 14:05:34 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:05:37.143991 | orchestrator | 2025-07-12 14:05:37 | INFO  | Task d8890ef0-a147-4c75-972c-46fb96a5e8a8 is in state STARTED 2025-07-12 14:05:37.145752 | orchestrator | 2025-07-12 14:05:37 | INFO  | Task be75192a-6f7d-4ef1-9024-2d7d3bb7ccdb is in state STARTED 2025-07-12 14:05:37.147131 | orchestrator | 2025-07-12 14:05:37 | INFO  | Task 37ed18e3-d297-4532-8568-7cf9eb8dad37 is in state STARTED 2025-07-12 14:05:37.149564 | orchestrator | 2025-07-12 14:05:37 | INFO  | Task 
30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED 2025-07-12 14:05:37.149761 | orchestrator | 2025-07-12 14:05:37 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:05:40.193847 | orchestrator | 2025-07-12 14:05:40 | INFO  | Task d8890ef0-a147-4c75-972c-46fb96a5e8a8 is in state STARTED 2025-07-12 14:05:40.195170 | orchestrator | 2025-07-12 14:05:40 | INFO  | Task be75192a-6f7d-4ef1-9024-2d7d3bb7ccdb is in state STARTED 2025-07-12 14:05:40.196839 | orchestrator | 2025-07-12 14:05:40 | INFO  | Task 37ed18e3-d297-4532-8568-7cf9eb8dad37 is in state STARTED 2025-07-12 14:05:40.199229 | orchestrator | 2025-07-12 14:05:40 | INFO  | Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED 2025-07-12 14:05:40.199437 | orchestrator | 2025-07-12 14:05:40 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:05:43.244375 | orchestrator | 2025-07-12 14:05:43 | INFO  | Task d8890ef0-a147-4c75-972c-46fb96a5e8a8 is in state STARTED 2025-07-12 14:05:43.245828 | orchestrator | 2025-07-12 14:05:43 | INFO  | Task be75192a-6f7d-4ef1-9024-2d7d3bb7ccdb is in state STARTED 2025-07-12 14:05:43.246997 | orchestrator | 2025-07-12 14:05:43 | INFO  | Task 37ed18e3-d297-4532-8568-7cf9eb8dad37 is in state STARTED 2025-07-12 14:05:43.249060 | orchestrator | 2025-07-12 14:05:43 | INFO  | Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED 2025-07-12 14:05:43.249098 | orchestrator | 2025-07-12 14:05:43 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:05:46.291204 | orchestrator | 2025-07-12 14:05:46 | INFO  | Task d8890ef0-a147-4c75-972c-46fb96a5e8a8 is in state STARTED 2025-07-12 14:05:46.291842 | orchestrator | 2025-07-12 14:05:46 | INFO  | Task be75192a-6f7d-4ef1-9024-2d7d3bb7ccdb is in state STARTED 2025-07-12 14:05:46.294113 | orchestrator | 2025-07-12 14:05:46 | INFO  | Task 37ed18e3-d297-4532-8568-7cf9eb8dad37 is in state STARTED 2025-07-12 14:05:46.294748 | orchestrator | 2025-07-12 14:05:46 | INFO  | Task 
30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED 2025-07-12 14:05:46.294807 | orchestrator | 2025-07-12 14:05:46 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:05:49.336414 | orchestrator | 2025-07-12 14:05:49 | INFO  | Task d8890ef0-a147-4c75-972c-46fb96a5e8a8 is in state STARTED 2025-07-12 14:05:49.338740 | orchestrator | 2025-07-12 14:05:49 | INFO  | Task be75192a-6f7d-4ef1-9024-2d7d3bb7ccdb is in state STARTED 2025-07-12 14:05:49.338771 | orchestrator | 2025-07-12 14:05:49 | INFO  | Task 37ed18e3-d297-4532-8568-7cf9eb8dad37 is in state STARTED 2025-07-12 14:05:49.338782 | orchestrator | 2025-07-12 14:05:49 | INFO  | Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED 2025-07-12 14:05:49.338792 | orchestrator | 2025-07-12 14:05:49 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:05:52.381742 | orchestrator | 2025-07-12 14:05:52 | INFO  | Task d8890ef0-a147-4c75-972c-46fb96a5e8a8 is in state STARTED 2025-07-12 14:05:52.383840 | orchestrator | 2025-07-12 14:05:52 | INFO  | Task be75192a-6f7d-4ef1-9024-2d7d3bb7ccdb is in state STARTED 2025-07-12 14:05:52.384916 | orchestrator | 2025-07-12 14:05:52 | INFO  | Task 37ed18e3-d297-4532-8568-7cf9eb8dad37 is in state STARTED 2025-07-12 14:05:52.386354 | orchestrator | 2025-07-12 14:05:52 | INFO  | Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED 2025-07-12 14:05:52.386512 | orchestrator | 2025-07-12 14:05:52 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:05:55.431067 | orchestrator | 2025-07-12 14:05:55 | INFO  | Task d8890ef0-a147-4c75-972c-46fb96a5e8a8 is in state STARTED 2025-07-12 14:05:55.432100 | orchestrator | 2025-07-12 14:05:55 | INFO  | Task be75192a-6f7d-4ef1-9024-2d7d3bb7ccdb is in state STARTED 2025-07-12 14:05:55.432805 | orchestrator | 2025-07-12 14:05:55 | INFO  | Task 37ed18e3-d297-4532-8568-7cf9eb8dad37 is in state STARTED 2025-07-12 14:05:55.434545 | orchestrator | 2025-07-12 14:05:55 | INFO  | Task 
30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED 2025-07-12 14:05:55.434560 | orchestrator | 2025-07-12 14:05:55 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:05:58.471086 | orchestrator | 2025-07-12 14:05:58 | INFO  | Task d8890ef0-a147-4c75-972c-46fb96a5e8a8 is in state STARTED 2025-07-12 14:05:58.472074 | orchestrator | 2025-07-12 14:05:58 | INFO  | Task be75192a-6f7d-4ef1-9024-2d7d3bb7ccdb is in state STARTED 2025-07-12 14:05:58.477904 | orchestrator | 2025-07-12 14:05:58 | INFO  | Task 37ed18e3-d297-4532-8568-7cf9eb8dad37 is in state STARTED 2025-07-12 14:05:58.478965 | orchestrator | 2025-07-12 14:05:58 | INFO  | Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED 2025-07-12 14:05:58.478991 | orchestrator | 2025-07-12 14:05:58 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:06:01.511913 | orchestrator | 2025-07-12 14:06:01 | INFO  | Task d8890ef0-a147-4c75-972c-46fb96a5e8a8 is in state STARTED 2025-07-12 14:06:01.512765 | orchestrator | 2025-07-12 14:06:01 | INFO  | Task be75192a-6f7d-4ef1-9024-2d7d3bb7ccdb is in state STARTED 2025-07-12 14:06:01.514186 | orchestrator | 2025-07-12 14:06:01.514231 | orchestrator | 2025-07-12 14:06:01.514252 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 14:06:01.514314 | orchestrator | 2025-07-12 14:06:01.514337 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 14:06:01.514358 | orchestrator | Saturday 12 July 2025 14:05:06 +0000 (0:00:00.296) 0:00:00.296 ********* 2025-07-12 14:06:01.514377 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:06:01.514512 | orchestrator | ok: [testbed-node-1] 2025-07-12 14:06:01.514527 | orchestrator | ok: [testbed-node-2] 2025-07-12 14:06:01.514538 | orchestrator | 2025-07-12 14:06:01.514550 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 14:06:01.514587 | 
orchestrator | Saturday 12 July 2025 14:05:06 +0000 (0:00:00.301) 0:00:00.597 ********* 2025-07-12 14:06:01.514599 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-07-12 14:06:01.514610 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-07-12 14:06:01.514621 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-07-12 14:06:01.514631 | orchestrator | 2025-07-12 14:06:01.514642 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-07-12 14:06:01.514653 | orchestrator | 2025-07-12 14:06:01.514669 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-07-12 14:06:01.514688 | orchestrator | Saturday 12 July 2025 14:05:07 +0000 (0:00:00.399) 0:00:00.997 ********* 2025-07-12 14:06:01.514707 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 14:06:01.514721 | orchestrator | 2025-07-12 14:06:01.514731 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-07-12 14:06:01.514742 | orchestrator | Saturday 12 July 2025 14:05:07 +0000 (0:00:00.561) 0:00:01.558 ********* 2025-07-12 14:06:01.514753 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-07-12 14:06:01.514763 | orchestrator | 2025-07-12 14:06:01.514774 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-07-12 14:06:01.514785 | orchestrator | Saturday 12 July 2025 14:05:11 +0000 (0:00:03.477) 0:00:05.035 ********* 2025-07-12 14:06:01.514795 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-07-12 14:06:01.514806 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-07-12 14:06:01.514817 | orchestrator | 2025-07-12 14:06:01.514827 | 
orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-07-12 14:06:01.514838 | orchestrator | Saturday 12 July 2025 14:05:17 +0000 (0:00:06.620) 0:00:11.656 ********* 2025-07-12 14:06:01.514849 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-12 14:06:01.514860 | orchestrator | 2025-07-12 14:06:01.514870 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-07-12 14:06:01.514881 | orchestrator | Saturday 12 July 2025 14:05:21 +0000 (0:00:03.227) 0:00:14.883 ********* 2025-07-12 14:06:01.514891 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-12 14:06:01.514902 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-07-12 14:06:01.514956 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-07-12 14:06:01.514968 | orchestrator | 2025-07-12 14:06:01.514979 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-07-12 14:06:01.514989 | orchestrator | Saturday 12 July 2025 14:05:29 +0000 (0:00:08.221) 0:00:23.104 ********* 2025-07-12 14:06:01.515000 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-12 14:06:01.515010 | orchestrator | 2025-07-12 14:06:01.515021 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-07-12 14:06:01.515032 | orchestrator | Saturday 12 July 2025 14:05:32 +0000 (0:00:03.328) 0:00:26.433 ********* 2025-07-12 14:06:01.515042 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-07-12 14:06:01.515053 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-07-12 14:06:01.515063 | orchestrator | 2025-07-12 14:06:01.515078 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-07-12 14:06:01.515089 | orchestrator | Saturday 12 July 2025 14:05:39 +0000 
(0:00:07.291) 0:00:33.725 ********* 2025-07-12 14:06:01.515099 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-07-12 14:06:01.515110 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-07-12 14:06:01.515120 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-07-12 14:06:01.515131 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-07-12 14:06:01.515152 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-07-12 14:06:01.515166 | orchestrator | 2025-07-12 14:06:01.515179 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-07-12 14:06:01.515191 | orchestrator | Saturday 12 July 2025 14:05:55 +0000 (0:00:15.604) 0:00:49.329 ********* 2025-07-12 14:06:01.515203 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 14:06:01.515215 | orchestrator | 2025-07-12 14:06:01.515228 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-07-12 14:06:01.515240 | orchestrator | Saturday 12 July 2025 14:05:56 +0000 (0:00:00.626) 0:00:49.956 ********* 2025-07-12 14:06:01.515254 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: keystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for compute service in RegionOne region not found 2025-07-12 14:06:01.515339 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"action": "os_nova_flavor", "changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible-tmp-1752329157.6876755-6742-220672978377162/AnsiballZ_compute_flavor.py\", line 107, in <module>\n _ansiballz_main()\n File \"/tmp/ansible-tmp-1752329157.6876755-6742-220672978377162/AnsiballZ_compute_flavor.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1752329157.6876755-6742-220672978377162/AnsiballZ_compute_flavor.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.compute_flavor', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.compute_flavor', _modlib_path=modlib_path),\n File \"<frozen runpy>\", line 226, in run_module\n File \"<frozen runpy>\", line 98, in _run_module_code\n File \"<frozen runpy>\", line 88, in _run_code\n File \"/tmp/ansible_os_nova_flavor_payload_tduzoxcn/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 367, in <module>\n File \"/tmp/ansible_os_nova_flavor_payload_tduzoxcn/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 363, in main\n File \"/tmp/ansible_os_nova_flavor_payload_tduzoxcn/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_os_nova_flavor_payload_tduzoxcn/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 220, in run\n File \"/opt/ansible/lib/python3.11/site-packages/openstack/service_description.py\", line 88, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/openstack/service_description.py\", line 286, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File
\"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/identity/base.py\", line 272, in get_endpoint_data\n endpoint_data = service_catalog.endpoint_data_for(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/access/service_catalog.py\", line 459, in endpoint_data_for\n raise exceptions.EndpointNotFound(msg)\nkeystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for compute service in RegionOne region not found\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2025-07-12 14:06:01.515368 | orchestrator | 2025-07-12 14:06:01.515381 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 14:06:01.515395 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-07-12 14:06:01.515409 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 14:06:01.515422 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 14:06:01.515435 | orchestrator | 2025-07-12 14:06:01.515448 | orchestrator | 2025-07-12 14:06:01.515460 | orchestrator | TASKS RECAP 
******************************************************************** 2025-07-12 14:06:01.515473 | orchestrator | Saturday 12 July 2025 14:05:59 +0000 (0:00:03.316) 0:00:53.272 ********* 2025-07-12 14:06:01.515493 | orchestrator | =============================================================================== 2025-07-12 14:06:01.515506 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.60s 2025-07-12 14:06:01.515517 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.22s 2025-07-12 14:06:01.515528 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.29s 2025-07-12 14:06:01.515539 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.62s 2025-07-12 14:06:01.515549 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.48s 2025-07-12 14:06:01.515560 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.33s 2025-07-12 14:06:01.515570 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.32s 2025-07-12 14:06:01.515581 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.23s 2025-07-12 14:06:01.515591 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.63s 2025-07-12 14:06:01.515601 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.56s 2025-07-12 14:06:01.515612 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.40s 2025-07-12 14:06:01.515622 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2025-07-12 14:06:01.515633 | orchestrator | 2025-07-12 14:06:01 | INFO  | Task 37ed18e3-d297-4532-8568-7cf9eb8dad37 is in state SUCCESS 2025-07-12 14:06:01.515644 | orchestrator | 2025-07-12 14:06:01 | 
INFO  | Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED 2025-07-12 14:06:01.515655 | orchestrator | 2025-07-12 14:06:01 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:06:04.565007 | orchestrator | 2025-07-12 14:06:04 | INFO  | Task d8890ef0-a147-4c75-972c-46fb96a5e8a8 is in state STARTED 2025-07-12 14:06:04.566154 | orchestrator | 2025-07-12 14:06:04 | INFO  | Task be75192a-6f7d-4ef1-9024-2d7d3bb7ccdb is in state STARTED 2025-07-12 14:06:04.567507 | orchestrator | 2025-07-12 14:06:04 | INFO  | Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED 2025-07-12 14:06:04.567534 | orchestrator | 2025-07-12 14:06:04 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:06:07.620698 | orchestrator | 2025-07-12 14:06:07 | INFO  | Task d8890ef0-a147-4c75-972c-46fb96a5e8a8 is in state STARTED 2025-07-12 14:06:07.623367 | orchestrator | 2025-07-12 14:06:07 | INFO  | Task be75192a-6f7d-4ef1-9024-2d7d3bb7ccdb is in state STARTED 2025-07-12 14:06:07.625580 | orchestrator | 2025-07-12 14:06:07 | INFO  | Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED 2025-07-12 14:06:07.625724 | orchestrator | 2025-07-12 14:06:07 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:06:10.673364 | orchestrator | 2025-07-12 14:06:10 | INFO  | Task d8890ef0-a147-4c75-972c-46fb96a5e8a8 is in state STARTED 2025-07-12 14:06:10.673468 | orchestrator | 2025-07-12 14:06:10 | INFO  | Task be75192a-6f7d-4ef1-9024-2d7d3bb7ccdb is in state STARTED 2025-07-12 14:06:10.675002 | orchestrator | 2025-07-12 14:06:10 | INFO  | Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED 2025-07-12 14:06:10.675031 | orchestrator | 2025-07-12 14:06:10 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:06:13.718935 | orchestrator | 2025-07-12 14:06:13 | INFO  | Task d8890ef0-a147-4c75-972c-46fb96a5e8a8 is in state STARTED 2025-07-12 14:06:13.719740 | orchestrator | 2025-07-12 14:06:13 | INFO  | Task be75192a-6f7d-4ef1-9024-2d7d3bb7ccdb is in 
state STARTED 2025-07-12 14:06:13.720603 | orchestrator | 2025-07-12 14:06:13 | INFO  | Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED 2025-07-12 14:06:13.720631 | orchestrator | 2025-07-12 14:06:13 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:07:11.661115 | orchestrator | 2025-07-12 14:07:11 | INFO  | Task d8890ef0-a147-4c75-972c-46fb96a5e8a8 is in state STARTED 2025-07-12 14:07:11.661601 | orchestrator | 2025-07-12 14:07:11 | INFO  | Task be75192a-6f7d-4ef1-9024-2d7d3bb7ccdb is in state STARTED 2025-07-12 14:07:11.664316 | orchestrator | 2025-07-12 14:07:11 |
INFO  | Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED 2025-07-12 14:07:11.664888 | orchestrator | 2025-07-12 14:07:11 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:07:14.709241 | orchestrator | 2025-07-12 14:07:14 | INFO  | Task d8890ef0-a147-4c75-972c-46fb96a5e8a8 is in state STARTED 2025-07-12 14:07:14.712031 | orchestrator | 2025-07-12 14:07:14 | INFO  | Task be75192a-6f7d-4ef1-9024-2d7d3bb7ccdb is in state STARTED 2025-07-12 14:07:14.714501 | orchestrator | 2025-07-12 14:07:14 | INFO  | Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED 2025-07-12 14:07:14.714884 | orchestrator | 2025-07-12 14:07:14 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:07:17.753646 | orchestrator | 2025-07-12 14:07:17 | INFO  | Task d8890ef0-a147-4c75-972c-46fb96a5e8a8 is in state STARTED 2025-07-12 14:07:17.756170 | orchestrator | 2025-07-12 14:07:17 | INFO  | Task be75192a-6f7d-4ef1-9024-2d7d3bb7ccdb is in state STARTED 2025-07-12 14:07:17.757766 | orchestrator | 2025-07-12 14:07:17 | INFO  | Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED 2025-07-12 14:07:17.757792 | orchestrator | 2025-07-12 14:07:17 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:07:20.803334 | orchestrator | 2025-07-12 14:07:20 | INFO  | Task d8890ef0-a147-4c75-972c-46fb96a5e8a8 is in state SUCCESS 2025-07-12 14:07:20.804212 | orchestrator | 2025-07-12 14:07:20 | INFO  | Task be75192a-6f7d-4ef1-9024-2d7d3bb7ccdb is in state STARTED 2025-07-12 14:07:20.804603 | orchestrator | 2025-07-12 14:07:20 | INFO  | Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED 2025-07-12 14:07:20.804631 | orchestrator | 2025-07-12 14:07:20 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:07:23.855410 | orchestrator | 2025-07-12 14:07:23 | INFO  | Task be75192a-6f7d-4ef1-9024-2d7d3bb7ccdb is in state STARTED 2025-07-12 14:07:23.856305 | orchestrator | 2025-07-12 14:07:23 | INFO  | Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in 
state STARTED 2025-07-12 14:07:23.856516 | orchestrator | 2025-07-12 14:07:23 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:07:26.898172 | orchestrator | 2025-07-12 14:07:26 | INFO  | Task be75192a-6f7d-4ef1-9024-2d7d3bb7ccdb is in state STARTED 2025-07-12 14:07:26.898314 | orchestrator | 2025-07-12 14:07:26 | INFO  | Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED 2025-07-12 14:07:26.898331 | orchestrator | 2025-07-12 14:07:26 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:07:29.936985 | orchestrator | 2025-07-12 14:07:29 | INFO  | Task be75192a-6f7d-4ef1-9024-2d7d3bb7ccdb is in state STARTED 2025-07-12 14:07:29.939511 | orchestrator | 2025-07-12 14:07:29 | INFO  | Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED 2025-07-12 14:07:29.939544 | orchestrator | 2025-07-12 14:07:29 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:07:32.975823 | orchestrator | 2025-07-12 14:07:32 | INFO  | Task be75192a-6f7d-4ef1-9024-2d7d3bb7ccdb is in state STARTED 2025-07-12 14:07:32.976230 | orchestrator | 2025-07-12 14:07:32 | INFO  | Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED 2025-07-12 14:07:32.976314 | orchestrator | 2025-07-12 14:07:32 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:07:36.037874 | orchestrator | 2025-07-12 14:07:36 | INFO  | Task be75192a-6f7d-4ef1-9024-2d7d3bb7ccdb is in state SUCCESS 2025-07-12 14:07:36.039412 | orchestrator | 2025-07-12 14:07:36.039458 | orchestrator | 2025-07-12 14:07:36.039472 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 14:07:36.039484 | orchestrator | 2025-07-12 14:07:36.039495 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 14:07:36.039507 | orchestrator | Saturday 12 July 2025 14:03:55 +0000 (0:00:00.176) 0:00:00.176 ********* 2025-07-12 14:07:36.039630 | orchestrator | ok: [testbed-node-0] 2025-07-12 
14:07:36.039646 | orchestrator | ok: [testbed-node-1] 2025-07-12 14:07:36.039684 | orchestrator | ok: [testbed-node-2] 2025-07-12 14:07:36.039696 | orchestrator | 2025-07-12 14:07:36.039707 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 14:07:36.039738 | orchestrator | Saturday 12 July 2025 14:03:55 +0000 (0:00:00.309) 0:00:00.485 ********* 2025-07-12 14:07:36.039758 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-07-12 14:07:36.039777 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-07-12 14:07:36.039793 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-07-12 14:07:36.039809 | orchestrator | 2025-07-12 14:07:36.039826 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-07-12 14:07:36.039845 | orchestrator | 2025-07-12 14:07:36.039864 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-07-12 14:07:36.039876 | orchestrator | Saturday 12 July 2025 14:03:56 +0000 (0:00:00.628) 0:00:01.114 ********* 2025-07-12 14:07:36.039887 | orchestrator | 2025-07-12 14:07:36.039897 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2025-07-12 14:07:36.039908 | orchestrator | 2025-07-12 14:07:36.039919 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2025-07-12 14:07:36.039929 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:07:36.039940 | orchestrator | ok: [testbed-node-1] 2025-07-12 14:07:36.039950 | orchestrator | ok: [testbed-node-2] 2025-07-12 14:07:36.039963 | orchestrator | 2025-07-12 14:07:36.039976 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 14:07:36.039989 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 14:07:36.040004 | 
orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 14:07:36.040017 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 14:07:36.040029 | orchestrator | 2025-07-12 14:07:36.040042 | orchestrator | 2025-07-12 14:07:36.040054 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 14:07:36.040066 | orchestrator | Saturday 12 July 2025 14:07:18 +0000 (0:03:21.803) 0:03:22.917 ********* 2025-07-12 14:07:36.040079 | orchestrator | =============================================================================== 2025-07-12 14:07:36.040092 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 201.80s 2025-07-12 14:07:36.040104 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s 2025-07-12 14:07:36.040116 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2025-07-12 14:07:36.040129 | orchestrator | 2025-07-12 14:07:36.040142 | orchestrator | 2025-07-12 14:07:36.040154 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 14:07:36.040166 | orchestrator | 2025-07-12 14:07:36.040180 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 14:07:36.040192 | orchestrator | Saturday 12 July 2025 14:05:22 +0000 (0:00:00.256) 0:00:00.256 ********* 2025-07-12 14:07:36.040204 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:07:36.040215 | orchestrator | ok: [testbed-node-1] 2025-07-12 14:07:36.040226 | orchestrator | ok: [testbed-node-2] 2025-07-12 14:07:36.040236 | orchestrator | 2025-07-12 14:07:36.040268 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 14:07:36.040279 | orchestrator | Saturday 12 July 2025 14:05:22 
+0000 (0:00:00.293) 0:00:00.549 ********* 2025-07-12 14:07:36.040290 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-07-12 14:07:36.040301 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-07-12 14:07:36.040311 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-07-12 14:07:36.040322 | orchestrator | 2025-07-12 14:07:36.040333 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-07-12 14:07:36.040344 | orchestrator | 2025-07-12 14:07:36.040364 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-07-12 14:07:36.040375 | orchestrator | Saturday 12 July 2025 14:05:23 +0000 (0:00:00.441) 0:00:00.991 ********* 2025-07-12 14:07:36.040386 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 14:07:36.040396 | orchestrator | 2025-07-12 14:07:36.040407 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-07-12 14:07:36.040418 | orchestrator | Saturday 12 July 2025 14:05:23 +0000 (0:00:00.514) 0:00:01.505 ********* 2025-07-12 14:07:36.040432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 14:07:36.040469 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 14:07:36.040482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 14:07:36.040493 | orchestrator | 2025-07-12 14:07:36.040504 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-07-12 14:07:36.040515 | orchestrator | Saturday 12 July 2025 14:05:24 +0000 (0:00:00.710) 0:00:02.216 ********* 2025-07-12 14:07:36.040525 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-07-12 14:07:36.040536 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-07-12 14:07:36.040547 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-12 14:07:36.040558 | 
orchestrator |
2025-07-12 14:07:36.040568 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-07-12 14:07:36.040579 | orchestrator | Saturday 12 July 2025 14:05:25 +0000 (0:00:00.833) 0:00:03.049 *********
2025-07-12 14:07:36.040590 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 14:07:36.040601 | orchestrator |
2025-07-12 14:07:36.040611 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2025-07-12 14:07:36.040622 | orchestrator | Saturday 12 July 2025 14:05:26 +0000 (0:00:00.683) 0:00:03.733 *********
2025-07-12 14:07:36.040633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 14:07:36.040652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 14:07:36.040671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 14:07:36.040683 | orchestrator |
2025-07-12 14:07:36.040694 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2025-07-12 14:07:36.040704 | orchestrator | Saturday 12 July 2025 14:05:27 +0000 (0:00:01.421) 0:00:05.154 *********
2025-07-12 14:07:36.040720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 14:07:36.040732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 14:07:36.040743 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:07:36.040754 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:07:36.040765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 14:07:36.040782 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:07:36.040793 | orchestrator |
2025-07-12 14:07:36.040804 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2025-07-12 14:07:36.040814 | orchestrator | Saturday 12 July 2025 14:05:27 +0000 (0:00:00.396) 0:00:05.551 *********
2025-07-12 14:07:36.040825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 14:07:36.040836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 14:07:36.040848 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:07:36.040859 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:07:36.040881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 14:07:36.040893 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:07:36.040903 | orchestrator |
2025-07-12 14:07:36.040914 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2025-07-12 14:07:36.040924 | orchestrator | Saturday 12 July 2025 14:05:28 +0000 (0:00:00.778) 0:00:06.330 *********
2025-07-12 14:07:36.040935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 14:07:36.040946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 14:07:36.040965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 14:07:36.040976 | orchestrator |
2025-07-12 14:07:36.040987 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2025-07-12 14:07:36.040998 | orchestrator | Saturday 12 July 2025 14:05:29 +0000 (0:00:01.193) 0:00:07.523 *********
2025-07-12 14:07:36.041008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 14:07:36.041028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 14:07:36.041044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 14:07:36.041055 | orchestrator |
2025-07-12 14:07:36.041066 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2025-07-12 14:07:36.041077 | orchestrator | Saturday 12 July 2025 14:05:31 +0000 (0:00:01.314) 0:00:08.837 *********
2025-07-12 14:07:36.041087 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:07:36.041104 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:07:36.041115 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:07:36.041125 | orchestrator |
2025-07-12 14:07:36.041135 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2025-07-12 14:07:36.041146 | orchestrator | Saturday 12 July 2025 14:05:31 +0000 (0:00:00.492) 0:00:09.330 *********
2025-07-12 14:07:36.041157 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-07-12 14:07:36.041168 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-07-12 14:07:36.041178 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-07-12 14:07:36.041189 | orchestrator |
2025-07-12 14:07:36.041200 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-07-12 14:07:36.041210 | orchestrator | Saturday 12 July 2025 14:05:32 +0000 (0:00:01.196) 0:00:10.527 *********
2025-07-12 14:07:36.041221 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-07-12 14:07:36.041232 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-07-12 14:07:36.041260 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-07-12 14:07:36.041271 | orchestrator |
2025-07-12 14:07:36.041281 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2025-07-12 14:07:36.041292 | orchestrator | Saturday 12 July 2025 14:05:34 +0000 (0:00:01.230) 0:00:11.757 *********
2025-07-12 14:07:36.041302 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-12 14:07:36.041312 | orchestrator |
2025-07-12 14:07:36.041323 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-07-12 14:07:36.041334 | orchestrator | Saturday 12 July 2025 14:05:34 +0000 (0:00:00.725) 0:00:12.483 *********
2025-07-12 14:07:36.041345 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2025-07-12 14:07:36.041355 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2025-07-12 14:07:36.041366 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:07:36.041376 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:07:36.041387 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:07:36.041397 | orchestrator |
2025-07-12 14:07:36.041408 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2025-07-12 14:07:36.041418 | orchestrator | Saturday 12 July 2025 14:05:35 +0000 (0:00:00.693) 0:00:13.177 *********
2025-07-12 14:07:36.041429 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:07:36.041439 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:07:36.041450 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:07:36.041460 | orchestrator |
2025-07-12 14:07:36.041471 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2025-07-12 14:07:36.041481 | orchestrator | Saturday 12 July 2025 14:05:36 +0000 (0:00:00.616) 0:00:13.793 *********
2025-07-12 14:07:36.041493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1078155, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.302797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.041516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1078155, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.302797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.041536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1078155, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.302797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.041547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1078148, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.2917972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.041558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1078148, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.2917972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.041569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1078148, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.2917972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.041581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1078143, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.2887971, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.041598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1078143, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.2887971, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.041627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1078143, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.2887971, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.041639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1078152, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.295797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.041650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1078152, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.295797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.041661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1078152, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.295797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.041672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1078136, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.283797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.041688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1078136, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.283797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.041711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1078136, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.283797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.041722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1078145, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.289797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.041734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1078145, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.289797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.041745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1078145, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.289797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.041756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1078151, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.2947972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.041768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1078151, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.2947972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.041798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1078151, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.2947972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.041810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1078134, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.2827969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.041822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1078134, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.2827969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.041833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1078134, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.2827969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.041844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1078115, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.271797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.041855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1078115, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.271797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.041881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1078115, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.271797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.041897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1078139, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.2857969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.041908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1078139, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.2857969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.041919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1078139, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.2857969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.041930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1078123, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.277797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.041941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1078123, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.277797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.041959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1078123, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.277797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.041981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1078150, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.293797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.041993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1078150, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.293797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.042004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1078150, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.293797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.042015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1078141, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.286797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.042072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1078141, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.286797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1078141, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.286797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1078154, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.296797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1078154, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.296797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1078154, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.296797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1078132, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.281797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1078132, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.281797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1078132, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.281797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1078146, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.290797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1078146, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.290797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1078146, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.290797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1078117, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.2767968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1078117, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.2767968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1078117, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.2767968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1078125, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.281797, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1078125, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.281797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1078125, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.281797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1078142, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 
1752326026.287797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1078142, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.287797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1078142, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.287797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1078220, 'dev': 
108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3397975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1078220, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3397975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1078220, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3397975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1078206, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3227973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1078206, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3227973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1078206, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3227973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1078164, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3037972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1078164, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3037972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1078164, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3037972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042536 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1078270, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3467977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1078270, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3467977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1078270, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3467977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1078168, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3047972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1078168, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3047972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1078168, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3047972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1078268, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3437977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1078268, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3437977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1078268, 'dev': 108, 'nlink': 1, 'atime': 
1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3437977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1078273, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3527977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1078273, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3527977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
222049, 'inode': 1078273, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3527977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1078255, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3407977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1078255, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3407977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1078255, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3407977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1078264, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3427978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1078264, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3427978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042779 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1078264, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3427978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1078173, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.305797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1078173, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.305797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2025-07-12 14:07:36.042820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1078173, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.305797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1078208, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3247974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1078208, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3247974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1078208, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3247974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1078285, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3557978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1078285, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3557978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1078285, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3557978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1078269, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3447976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1078269, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3447976, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1078269, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3447976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1078185, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3107972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1078185, 'dev': 108, 'nlink': 1, 'atime': 
1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3107972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.042990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1078185, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3107972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.043001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1078179, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3087974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.043012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 30898, 'inode': 1078179, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3087974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.043029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1078179, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3087974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.043049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1078192, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3117974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.043060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1078192, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3117974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.043078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1078192, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3117974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.043089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1078193, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3217974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.043100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1078193, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3217974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.043111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1078193, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3217974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.043134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1078216, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3257976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.043145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1078216, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3257976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.043162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1078216, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3257976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.043173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1078262, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3417976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.043184 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1078262, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3417976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.043195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1078262, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3417976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.043217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1078217, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3257976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2025-07-12 14:07:36.043229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1078217, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3257976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.043272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1078217, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3257976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.043293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1078290, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3567977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.043311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1078290, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3567977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.043327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1078290, 'dev': 108, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752326026.3567977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.043343 | orchestrator |
2025-07-12 14:07:36.043360 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2025-07-12 14:07:36.043381 | orchestrator | Saturday 12 July 2025 14:06:12 +0000 (0:00:36.453) 0:00:50.247 *********
2025-07-12 14:07:36.043416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 14:07:36.043429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 14:07:36.043450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 14:07:36.043461 | orchestrator |
2025-07-12 14:07:36.043472 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2025-07-12 14:07:36.043483 | orchestrator | Saturday 12 July 2025 14:06:13 +0000 (0:00:00.918) 0:00:51.166 *********
2025-07-12 14:07:36.043494 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:07:36.043505 | orchestrator |
2025-07-12 14:07:36.043515 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2025-07-12 14:07:36.043526 | orchestrator | Saturday 12 July 2025 14:06:15 +0000 (0:00:02.221) 0:00:53.388 *********
2025-07-12 14:07:36.043537 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:07:36.043547 | orchestrator |
2025-07-12 14:07:36.043558 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-07-12 14:07:36.043568 | orchestrator | Saturday 12 July 2025 14:06:17 +0000 (0:00:02.094) 0:00:55.483 *********
2025-07-12 14:07:36.043579 | orchestrator |
2025-07-12 14:07:36.043589 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-07-12 14:07:36.043600 | orchestrator | Saturday 12 July 2025 14:06:18 +0000 (0:00:00.232) 0:00:55.715 *********
2025-07-12 14:07:36.043611 | orchestrator |
2025-07-12 14:07:36.043622 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-07-12 14:07:36.043632 | orchestrator | Saturday 12 July 2025 14:06:18 +0000 (0:00:00.061) 0:00:55.777 *********
2025-07-12 14:07:36.043643 | orchestrator |
2025-07-12 14:07:36.043653 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-07-12 14:07:36.043664 | orchestrator | Saturday 12 July 2025 14:06:18 +0000 (0:00:00.063) 0:00:55.841 *********
2025-07-12 14:07:36.043674 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:07:36.043685 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:07:36.043695 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:07:36.043706 | orchestrator |
2025-07-12 14:07:36.043716 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-07-12 14:07:36.043727 | orchestrator | Saturday 12 July 2025 14:06:19 +0000 (0:00:01.832) 0:00:57.673 *********
2025-07-12 14:07:36.043738 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:07:36.043748 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:07:36.043759 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2025-07-12 14:07:36.043770 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2025-07-12 14:07:36.043780 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2025-07-12 14:07:36.043791 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:07:36.043802 | orchestrator |
2025-07-12 14:07:36.043812 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-07-12 14:07:36.043823 | orchestrator | Saturday 12 July 2025 14:06:58 +0000 (0:00:38.179) 0:01:35.853 *********
2025-07-12 14:07:36.043841 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:07:36.043852 | orchestrator | changed: [testbed-node-1]
2025-07-12 14:07:36.043862 | orchestrator | changed: [testbed-node-2]
2025-07-12 14:07:36.043873 | orchestrator |
2025-07-12 14:07:36.043884 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-07-12 14:07:36.043895 | orchestrator | Saturday 12 July 2025 14:07:30 +0000 (0:00:32.111) 0:02:07.964 *********
2025-07-12 14:07:36.043905 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:07:36.043916 | orchestrator |
2025-07-12 14:07:36.043933 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-07-12 14:07:36.043944 | orchestrator | Saturday 12 July 2025 14:07:32 +0000 (0:00:02.368) 0:02:10.332 *********
2025-07-12 14:07:36.043954 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:07:36.043965 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:07:36.043975 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:07:36.043985 | orchestrator |
2025-07-12 14:07:36.043996 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-07-12 14:07:36.044007 | orchestrator | Saturday 12 July 2025 14:07:32 +0000 (0:00:00.345) 0:02:10.678 *********
2025-07-12 14:07:36.044024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-07-12 14:07:36.044036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-07-12 14:07:36.044048 | orchestrator |
2025-07-12 14:07:36.044059 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-07-12 14:07:36.044070 | orchestrator | Saturday 12 July 2025 14:07:35 +0000 (0:00:02.287) 0:02:12.965 *********
2025-07-12 14:07:36.044080 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:07:36.044091 | orchestrator |
2025-07-12 14:07:36.044102 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 14:07:36.044113 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-07-12 14:07:36.044124 | orchestrator | testbed-node-1 : ok=14  
changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-07-12 14:07:36.044135 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-07-12 14:07:36.044145 | orchestrator | 2025-07-12 14:07:36.044156 | orchestrator | 2025-07-12 14:07:36.044167 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 14:07:36.044177 | orchestrator | Saturday 12 July 2025 14:07:35 +0000 (0:00:00.257) 0:02:13.223 ********* 2025-07-12 14:07:36.044188 | orchestrator | =============================================================================== 2025-07-12 14:07:36.044198 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.18s 2025-07-12 14:07:36.044209 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 36.45s 2025-07-12 14:07:36.044219 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 32.11s 2025-07-12 14:07:36.044230 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.37s 2025-07-12 14:07:36.044258 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.29s 2025-07-12 14:07:36.044269 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.22s 2025-07-12 14:07:36.044280 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.09s 2025-07-12 14:07:36.044297 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.83s 2025-07-12 14:07:36.044308 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.42s 2025-07-12 14:07:36.044318 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.31s 2025-07-12 14:07:36.044329 | orchestrator | grafana : Configuring dashboards provisioning 
--------------------------- 1.23s 2025-07-12 14:07:36.044339 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.20s 2025-07-12 14:07:36.044350 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.19s 2025-07-12 14:07:36.044360 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.92s 2025-07-12 14:07:36.044371 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.83s 2025-07-12 14:07:36.044381 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.78s 2025-07-12 14:07:36.044391 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.73s 2025-07-12 14:07:36.044402 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.71s 2025-07-12 14:07:36.044412 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.69s 2025-07-12 14:07:36.044423 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.68s 2025-07-12 14:07:36.044433 | orchestrator | 2025-07-12 14:07:36 | INFO  | Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED 2025-07-12 14:07:36.044444 | orchestrator | 2025-07-12 14:07:36 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:07:39.083513 | orchestrator | 2025-07-12 14:07:39 | INFO  | Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED 2025-07-12 14:07:39.083592 | orchestrator | 2025-07-12 14:07:39 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:07:42.121749 | orchestrator | 2025-07-12 14:07:42 | INFO  | Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in state STARTED 2025-07-12 14:07:42.121847 | orchestrator | 2025-07-12 14:07:42 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:07:45.164510 | orchestrator | 2025-07-12 14:07:45 | INFO  | Task 
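The status loop above (look up the task state, wait, look it up again until the task leaves STARTED) is a generic poll-until-terminal pattern. A minimal client-side sketch of it, not the actual OSISM implementation: `get_state` is a hypothetical callable standing in for whatever Celery/API lookup the orchestrator performs.

```python
import time

# Terminal Celery-style task states; anything else means "keep polling".
TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}


def wait_for_task(get_state, interval=1.0, sleep=time.sleep, log=print):
    """Poll get_state() until it reports a terminal state.

    Mirrors the log pattern:
        Task ... is in state STARTED
        Wait 1 second(s) until the next check
    """
    while True:
        state = get_state()
        log(f"Task is in state {state}")
        if state in TERMINAL_STATES:
            return state
        log(f"Wait {interval:g} second(s) until the next check")
        sleep(interval)
```

Note that although the message says "Wait 1 second(s)", consecutive checks in the log are roughly three seconds apart, which suggests the state lookup itself takes a couple of seconds on top of the sleep.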
2025-07-12 14:11:48.705380 | orchestrator | 2025-07-12 14:11:48 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:11:51.751720 | orchestrator | 2025-07-12 14:11:51 | INFO  | Task 30ae25fc-b8f4-4de6-b349-44f878cf401c is in state SUCCESS
2025-07-12 14:11:51.754067 | orchestrator |
2025-07-12 14:11:51.754119 | orchestrator |
2025-07-12 14:11:51.754133 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 14:11:51.754145 | orchestrator |
2025-07-12 14:11:51.754184 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2025-07-12 14:11:51.754204 | orchestrator | Saturday 12 July 2025 14:03:11 +0000 (0:00:00.801) 0:00:00.801 *********
2025-07-12 14:11:51.754216 | orchestrator | changed: [testbed-manager]
2025-07-12 14:11:51.754228 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:11:51.754244 | orchestrator | changed: [testbed-node-1]
2025-07-12 14:11:51.754262 | orchestrator | changed: [testbed-node-2]
2025-07-12 14:11:51.754280 | orchestrator | changed: [testbed-node-3]
2025-07-12 14:11:51.754297 | orchestrator | changed: [testbed-node-4]
2025-07-12 14:11:51.754315 | orchestrator | changed: [testbed-node-5]
2025-07-12 14:11:51.754333 | orchestrator |
2025-07-12 14:11:51.754352 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 14:11:51.754370 | orchestrator | Saturday 12 July 2025 14:03:12 +0000 (0:00:01.382) 0:00:02.183 *********
2025-07-12 14:11:51.754389 | orchestrator | changed: [testbed-manager]
2025-07-12 14:11:51.754407 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:11:51.754425 | orchestrator | changed: [testbed-node-1]
2025-07-12 14:11:51.754443 | orchestrator | changed: [testbed-node-2]
2025-07-12 14:11:51.754461 | orchestrator | changed: [testbed-node-3]
2025-07-12 14:11:51.754479 | orchestrator | changed: [testbed-node-4]
2025-07-12 14:11:51.754497 | orchestrator | changed: [testbed-node-5]
2025-07-12 14:11:51.754515 | orchestrator |
2025-07-12 14:11:51.754532 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 14:11:51.754544 | orchestrator | Saturday 12 July 2025 14:03:13 +0000 (0:00:00.791) 0:00:02.975 *********
2025-07-12 14:11:51.754554 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2025-07-12 14:11:51.754565 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2025-07-12 14:11:51.754576 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2025-07-12 14:11:51.754587 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2025-07-12 14:11:51.754599 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2025-07-12 14:11:51.754611 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2025-07-12 14:11:51.754623 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2025-07-12 14:11:51.754635 | orchestrator |
2025-07-12 14:11:51.754648 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2025-07-12 14:11:51.754660 | orchestrator |
2025-07-12 14:11:51.754672 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-07-12 14:11:51.754684 | orchestrator | Saturday 12 July 2025 14:03:14 +0000 (0:00:00.949) 0:00:03.924 *********
2025-07-12 14:11:51.754697 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 14:11:51.754709 | orchestrator |
2025-07-12 14:11:51.754721 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2025-07-12 14:11:51.754733 | orchestrator | Saturday 12 July 2025 14:03:16 +0000 (0:00:01.381) 0:00:05.305 *********
2025-07-12 14:11:51.754746 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2025-07-12 14:11:51.754759 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2025-07-12 14:11:51.754771 | orchestrator |
2025-07-12 14:11:51.754783 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2025-07-12 14:11:51.754796 | orchestrator | Saturday 12 July 2025 14:03:20 +0000 (0:00:04.358) 0:00:09.664 *********
2025-07-12 14:11:51.754808 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-12 14:11:51.754820 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-12 14:11:51.754832 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:11:51.754844 | orchestrator |
2025-07-12 14:11:51.754856 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-07-12 14:11:51.754887 | orchestrator | Saturday 12 July 2025 14:03:24 +0000 (0:00:04.340) 0:00:14.004 *********
2025-07-12 14:11:51.754898 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:11:51.754908 | orchestrator |
2025-07-12 14:11:51.754919 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2025-07-12 14:11:51.754930 | orchestrator | Saturday 12 July 2025 14:03:26 +0000 (0:00:01.315) 0:00:15.320 *********
2025-07-12 14:11:51.754940 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:11:51.754951 | orchestrator |
2025-07-12 14:11:51.754962 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2025-07-12 14:11:51.754973 | orchestrator | Saturday 12 July 2025 14:03:27 +0000 (0:00:01.496) 0:00:16.816 *********
2025-07-12 14:11:51.754984 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:11:51.754994 | orchestrator |
2025-07-12 14:11:51.755020 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-07-12 14:11:51.755031 | orchestrator | Saturday 12 July 2025 14:03:31 +0000 (0:00:04.193) 0:00:21.009 *********
2025-07-12 14:11:51.755042 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:11:51.755052 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:11:51.755063 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:11:51.755074 | orchestrator |
2025-07-12 14:11:51.755093 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-07-12 14:11:51.755111 | orchestrator | Saturday 12 July 2025 14:03:32 +0000 (0:00:00.532) 0:00:21.542 *********
2025-07-12 14:11:51.755129 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:11:51.755149 | orchestrator |
2025-07-12 14:11:51.755196 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2025-07-12 14:11:51.755218 | orchestrator | Saturday 12 July 2025 14:04:08 +0000 (0:00:35.992) 0:00:57.534 *********
2025-07-12 14:11:51.755238 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:11:51.755258 | orchestrator |
2025-07-12 14:11:51.755279 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-07-12 14:11:51.755299 | orchestrator | Saturday 12 July 2025 14:04:22 +0000 (0:00:13.949) 0:01:11.483 *********
2025-07-12 14:11:51.755319 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:11:51.755338 | orchestrator |
2025-07-12 14:11:51.755359 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-07-12 14:11:51.755381 | orchestrator | Saturday 12 July 2025 14:04:33 +0000 (0:00:11.765) 0:01:23.249 *********
2025-07-12 14:11:51.755416 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:11:51.755428 | orchestrator |
2025-07-12 14:11:51.755439 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2025-07-12 14:11:51.755450 | orchestrator | Saturday 12 July 2025 14:04:35 +0000 (0:00:01.173) 0:01:24.422 *********
2025-07-12 14:11:51.755461 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:11:51.755471 | orchestrator |
2025-07-12 14:11:51.755482 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-07-12 14:11:51.755493 | orchestrator | Saturday 12 July 2025 14:04:35 +0000 (0:00:00.627) 0:01:25.050 *********
2025-07-12 14:11:51.755504 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 14:11:51.755514 | orchestrator |
2025-07-12 14:11:51.755525 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-07-12 14:11:51.755536 | orchestrator | Saturday 12 July 2025 14:04:36 +0000 (0:00:00.858) 0:01:25.908 *********
2025-07-12 14:11:51.755547 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:11:51.755558 | orchestrator |
2025-07-12 14:11:51.755568 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-07-12 14:11:51.755579 | orchestrator | Saturday 12 July 2025 14:04:54 +0000 (0:00:18.166) 0:01:44.075 *********
2025-07-12 14:11:51.755590 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:11:51.755600 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:11:51.755611 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:11:51.755621 | orchestrator |
2025-07-12 14:11:51.755632 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2025-07-12 14:11:51.755654 | orchestrator |
2025-07-12 14:11:51.755665 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-07-12 14:11:51.755676 | orchestrator | Saturday 12 July 2025 14:04:55 +0000 (0:00:00.339) 0:01:44.414 *********
2025-07-12 14:11:51.755687 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 14:11:51.755697 | orchestrator |
2025-07-12 14:11:51.755708 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2025-07-12 14:11:51.755719 | orchestrator | Saturday 12 July 2025 14:04:55 +0000 (0:00:00.577) 0:01:44.992 *********
2025-07-12 14:11:51.755729 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:11:51.755740 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:11:51.755751 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:11:51.755761 | orchestrator |
2025-07-12 14:11:51.755772 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2025-07-12 14:11:51.755783 | orchestrator | Saturday 12 July 2025 14:04:57 +0000 (0:00:01.716) 0:01:46.708 *********
2025-07-12 14:11:51.755793 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:11:51.755804 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:11:51.755814 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:11:51.755825 | orchestrator |
2025-07-12 14:11:51.755836 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-07-12 14:11:51.755847 | orchestrator | Saturday 12 July 2025 14:04:59 +0000 (0:00:02.055) 0:01:48.764 *********
2025-07-12 14:11:51.755857 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:11:51.755868 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:11:51.755878 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:11:51.755889 | orchestrator |
2025-07-12 14:11:51.755900 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-07-12 14:11:51.755910 | orchestrator | Saturday 12 July 2025 14:04:59 +0000 (0:00:00.329) 0:01:49.094 *********
2025-07-12 14:11:51.755921 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-07-12 14:11:51.755931 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:11:51.755942 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-07-12 14:11:51.755952 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:11:51.755963 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-07-12 14:11:51.755974 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-07-12 14:11:51.755985 | orchestrator | 2025-07-12 14:11:51.755996 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-07-12 14:11:51.756006 | orchestrator | Saturday 12 July 2025 14:05:06 +0000 (0:00:07.162) 0:01:56.256 ********* 2025-07-12 14:11:51.756017 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:51.756028 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:51.756038 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:51.756048 | orchestrator | 2025-07-12 14:11:51.756060 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-07-12 14:11:51.756079 | orchestrator | Saturday 12 July 2025 14:05:07 +0000 (0:00:00.345) 0:01:56.602 ********* 2025-07-12 14:11:51.756107 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-07-12 14:11:51.756126 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:51.756145 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-07-12 14:11:51.756195 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:51.756216 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-07-12 14:11:51.756236 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:51.756256 | orchestrator | 2025-07-12 14:11:51.756276 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-07-12 14:11:51.756296 | orchestrator | Saturday 12 July 2025 14:05:07 +0000 (0:00:00.617) 0:01:57.219 ********* 2025-07-12 14:11:51.756319 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:51.756339 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:11:51.756360 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:51.756381 | orchestrator | 2025-07-12 14:11:51.756406 | orchestrator | TASK [nova-cell : Copying over config.json files 
for nova-cell-bootstrap] ****** 2025-07-12 14:11:51.756417 | orchestrator | Saturday 12 July 2025 14:05:08 +0000 (0:00:00.573) 0:01:57.793 ********* 2025-07-12 14:11:51.756428 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:51.756438 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:51.756448 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:11:51.756459 | orchestrator | 2025-07-12 14:11:51.756470 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-07-12 14:11:51.756480 | orchestrator | Saturday 12 July 2025 14:05:09 +0000 (0:00:00.999) 0:01:58.792 ********* 2025-07-12 14:11:51.756491 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:51.756501 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:51.756521 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:11:51.756532 | orchestrator | 2025-07-12 14:11:51.756542 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-07-12 14:11:51.756553 | orchestrator | Saturday 12 July 2025 14:05:11 +0000 (0:00:02.116) 0:02:00.909 ********* 2025-07-12 14:11:51.756586 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:51.756597 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:51.756607 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:11:51.756618 | orchestrator | 2025-07-12 14:11:51.756629 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-07-12 14:11:51.756640 | orchestrator | Saturday 12 July 2025 14:05:33 +0000 (0:00:21.537) 0:02:22.447 ********* 2025-07-12 14:11:51.756651 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:51.756661 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:51.756672 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:11:51.756682 | orchestrator | 2025-07-12 14:11:51.756693 | orchestrator | TASK [nova-cell : Extract current cell settings from list] 
********************* 2025-07-12 14:11:51.756703 | orchestrator | Saturday 12 July 2025 14:05:44 +0000 (0:00:11.725) 0:02:34.172 ********* 2025-07-12 14:11:51.756714 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:11:51.756724 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:51.756735 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:51.756745 | orchestrator | 2025-07-12 14:11:51.756756 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-07-12 14:11:51.756767 | orchestrator | Saturday 12 July 2025 14:05:45 +0000 (0:00:00.892) 0:02:35.064 ********* 2025-07-12 14:11:51.756777 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:51.756788 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:51.756802 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:11:51.756816 | orchestrator | 2025-07-12 14:11:51.756827 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-07-12 14:11:51.756838 | orchestrator | Saturday 12 July 2025 14:05:57 +0000 (0:00:11.321) 0:02:46.386 ********* 2025-07-12 14:11:51.756848 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:51.756859 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:51.756869 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:51.756880 | orchestrator | 2025-07-12 14:11:51.756890 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-07-12 14:11:51.756901 | orchestrator | Saturday 12 July 2025 14:05:58 +0000 (0:00:01.467) 0:02:47.854 ********* 2025-07-12 14:11:51.756911 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:51.756922 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:51.756932 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:51.756943 | orchestrator | 2025-07-12 14:11:51.756953 | orchestrator | PLAY [Apply role nova] 
********************************************************* 2025-07-12 14:11:51.756964 | orchestrator | 2025-07-12 14:11:51.756974 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-07-12 14:11:51.756985 | orchestrator | Saturday 12 July 2025 14:05:58 +0000 (0:00:00.328) 0:02:48.182 ********* 2025-07-12 14:11:51.756995 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 14:11:51.757007 | orchestrator | 2025-07-12 14:11:51.757115 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-07-12 14:11:51.757128 | orchestrator | Saturday 12 July 2025 14:05:59 +0000 (0:00:00.551) 0:02:48.734 ********* 2025-07-12 14:11:51.757139 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-07-12 14:11:51.757150 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-07-12 14:11:51.757227 | orchestrator | 2025-07-12 14:11:51.757240 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-07-12 14:11:51.757251 | orchestrator | Saturday 12 July 2025 14:06:02 +0000 (0:00:03.070) 0:02:51.804 ********* 2025-07-12 14:11:51.757262 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-07-12 14:11:51.757274 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-07-12 14:11:51.757285 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-07-12 14:11:51.757296 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-07-12 14:11:51.757307 | orchestrator | 2025-07-12 14:11:51.757317 | orchestrator | TASK [service-ks-register : nova | 
Creating projects] ************************** 2025-07-12 14:11:51.757335 | orchestrator | Saturday 12 July 2025 14:06:09 +0000 (0:00:06.577) 0:02:58.381 ********* 2025-07-12 14:11:51.757346 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-12 14:11:51.757356 | orchestrator | 2025-07-12 14:11:51.757367 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-07-12 14:11:51.757378 | orchestrator | Saturday 12 July 2025 14:06:12 +0000 (0:00:03.259) 0:03:01.641 ********* 2025-07-12 14:11:51.757389 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-12 14:11:51.757399 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-07-12 14:11:51.757410 | orchestrator | 2025-07-12 14:11:51.757421 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-07-12 14:11:51.757431 | orchestrator | Saturday 12 July 2025 14:06:16 +0000 (0:00:03.757) 0:03:05.399 ********* 2025-07-12 14:11:51.757441 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-12 14:11:51.757452 | orchestrator | 2025-07-12 14:11:51.757463 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-07-12 14:11:51.757473 | orchestrator | Saturday 12 July 2025 14:06:19 +0000 (0:00:03.200) 0:03:08.599 ********* 2025-07-12 14:11:51.757484 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-07-12 14:11:51.757495 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-07-12 14:11:51.757505 | orchestrator | 2025-07-12 14:11:51.757516 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-07-12 14:11:51.757547 | orchestrator | Saturday 12 July 2025 14:06:26 +0000 (0:00:07.322) 0:03:15.921 ********* 2025-07-12 14:11:51.757565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': 
{'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 14:11:51.757592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 14:11:51.757612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 14:11:51.757635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 
'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.757649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.757660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.757679 | orchestrator | 2025-07-12 14:11:51.757690 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-07-12 14:11:51.757701 | orchestrator | Saturday 12 July 2025 14:06:27 +0000 (0:00:01.218) 0:03:17.140 ********* 2025-07-12 14:11:51.757711 | orchestrator | skipping: [testbed-node-0] 2025-07-12 
14:11:51.757722 | orchestrator | 2025-07-12 14:11:51.757733 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-07-12 14:11:51.757743 | orchestrator | Saturday 12 July 2025 14:06:28 +0000 (0:00:00.140) 0:03:17.280 ********* 2025-07-12 14:11:51.757753 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:51.757762 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:51.757771 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:51.757781 | orchestrator | 2025-07-12 14:11:51.757791 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-07-12 14:11:51.757800 | orchestrator | Saturday 12 July 2025 14:06:28 +0000 (0:00:00.519) 0:03:17.800 ********* 2025-07-12 14:11:51.757809 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-12 14:11:51.757819 | orchestrator | 2025-07-12 14:11:51.757828 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-07-12 14:11:51.757838 | orchestrator | Saturday 12 July 2025 14:06:29 +0000 (0:00:00.685) 0:03:18.485 ********* 2025-07-12 14:11:51.757847 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:51.757856 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:51.757866 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:51.757875 | orchestrator | 2025-07-12 14:11:51.757884 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-07-12 14:11:51.757894 | orchestrator | Saturday 12 July 2025 14:06:29 +0000 (0:00:00.304) 0:03:18.790 ********* 2025-07-12 14:11:51.757903 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 14:11:51.757912 | orchestrator | 2025-07-12 14:11:51.757922 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-07-12 14:11:51.757931 | orchestrator | 
Saturday 12 July 2025 14:06:30 +0000 (0:00:00.705) 0:03:19.496 ********* 2025-07-12 14:11:51.757952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 14:11:51.757964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 
'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 14:11:51.757982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 14:11:51.757997 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.758007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.758066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.758093 | orchestrator | 2025-07-12 14:11:51.758103 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-07-12 
14:11:51.758112 | orchestrator | Saturday 12 July 2025 14:06:32 +0000 (0:00:02.311) 0:03:21.807 ********* 2025-07-12 14:11:51.758123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 14:11:51.758134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}})  2025-07-12 14:11:51.758144 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:51.758182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 14:11:51.758211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': 
'30'}}})  2025-07-12 14:11:51.758241 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:51.758253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 14:11:51.758264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  
2025-07-12 14:11:51.758274 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:51.758283 | orchestrator | 2025-07-12 14:11:51.758300 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-07-12 14:11:51.758313 | orchestrator | Saturday 12 July 2025 14:06:33 +0000 (0:00:00.574) 0:03:22.381 ********* 2025-07-12 14:11:51.758348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 14:11:51.758368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 14:11:51.758396 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:51 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-12 14:11:51.758432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 14:11:51.758457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 14:11:51.758467 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:51.758482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 14:11:51.758493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 14:11:51.758510 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:51.758519 | orchestrator | 2025-07-12 14:11:51.758529 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-07-12 14:11:51.758539 | orchestrator | Saturday 12 July 2025 14:06:34 +0000 (0:00:00.939) 0:03:23.321 ********* 2025-07-12 14:11:51.758557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 14:11:51.758569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 14:11:51.758585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 14:11:51.758609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.758620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.758630 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.758640 | orchestrator | 2025-07-12 14:11:51.758649 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-07-12 14:11:51.758659 | orchestrator | Saturday 12 July 2025 14:06:36 +0000 (0:00:02.210) 0:03:25.532 ********* 2025-07-12 14:11:51.758669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 
'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 14:11:51.758684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 14:11:51.758709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 14:11:51.758720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.758730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.758740 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.758756 | orchestrator | 2025-07-12 14:11:51.758766 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-07-12 14:11:51.758776 | orchestrator | Saturday 12 July 2025 14:06:41 +0000 (0:00:05.654) 0:03:31.187 ********* 2025-07-12 14:11:51.758797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 14:11:51.758809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 14:11:51.758819 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:51.758829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 14:11:51.758839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 14:11:51.758855 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:51.758869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 14:11:51.758887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 14:11:51.758897 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:51.758906 | orchestrator | 2025-07-12 14:11:51.758916 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-07-12 14:11:51.758926 | orchestrator | Saturday 12 July 2025 14:06:42 +0000 (0:00:00.584) 0:03:31.771 ********* 2025-07-12 14:11:51.758935 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:11:51.758944 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:11:51.758954 | orchestrator | changed: [testbed-node-2] 2025-07-12 14:11:51.758963 | orchestrator | 2025-07-12 14:11:51.758973 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-07-12 14:11:51.758982 | orchestrator | Saturday 12 July 2025 14:06:44 +0000 (0:00:01.994) 0:03:33.766 ********* 2025-07-12 14:11:51.758992 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:51.759001 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:51.759010 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:51.759019 | orchestrator | 2025-07-12 14:11:51.759029 | orchestrator | TASK [nova : Check nova 
containers] ******************************************** 2025-07-12 14:11:51.759038 | orchestrator | Saturday 12 July 2025 14:06:44 +0000 (0:00:00.331) 0:03:34.097 ********* 2025-07-12 14:11:51.759048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 14:11:51.759072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 14:11:51.759091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no'}}}}) 2025-07-12 14:11:51.759106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.759123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.759148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.759187 | orchestrator | 2025-07-12 14:11:51.759204 | orchestrator 
| TASK [nova : Flush handlers] ***************************************************
2025-07-12 14:11:51.759218 | orchestrator | Saturday 12 July 2025 14:06:46 +0000 (0:00:01.852) 0:03:35.950 *********
2025-07-12 14:11:51.759234 | orchestrator |
2025-07-12 14:11:51.759249 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-07-12 14:11:51.759264 | orchestrator | Saturday 12 July 2025 14:06:46 +0000 (0:00:00.135) 0:03:36.086 *********
2025-07-12 14:11:51.759279 | orchestrator |
2025-07-12 14:11:51.759303 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-07-12 14:11:51.759319 | orchestrator | Saturday 12 July 2025 14:06:46 +0000 (0:00:00.140) 0:03:36.226 *********
2025-07-12 14:11:51.759335 | orchestrator |
2025-07-12 14:11:51.759350 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2025-07-12 14:11:51.759367 | orchestrator | Saturday 12 July 2025 14:06:47 +0000 (0:00:00.295) 0:03:36.522 *********
2025-07-12 14:11:51.759384 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:11:51.759400 | orchestrator | changed: [testbed-node-2]
2025-07-12 14:11:51.759415 | orchestrator | changed: [testbed-node-1]
2025-07-12 14:11:51.759424 | orchestrator |
2025-07-12 14:11:51.759434 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2025-07-12 14:11:51.759443 | orchestrator | Saturday 12 July 2025 14:07:09 +0000 (0:00:22.731) 0:03:59.254 *********
2025-07-12 14:11:51.759452 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:11:51.759462 | orchestrator | changed: [testbed-node-1]
2025-07-12 14:11:51.759471 | orchestrator | changed: [testbed-node-2]
2025-07-12 14:11:51.759480 | orchestrator |
2025-07-12 14:11:51.759490 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2025-07-12 14:11:51.759499 | orchestrator |
2025-07-12 14:11:51.759508 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-07-12 14:11:51.759518 | orchestrator | Saturday 12 July 2025 14:07:21 +0000 (0:00:11.136) 0:04:10.390 *********
2025-07-12 14:11:51.759536 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 14:11:51.759546 | orchestrator |
2025-07-12 14:11:51.759555 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-07-12 14:11:51.759565 | orchestrator | Saturday 12 July 2025 14:07:22 +0000 (0:00:01.198) 0:04:11.589 *********
2025-07-12 14:11:51.759574 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:11:51.759583 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:11:51.759593 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:11:51.759602 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:11:51.759611 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:11:51.759620 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:11:51.759630 | orchestrator |
2025-07-12 14:11:51.759639 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2025-07-12 14:11:51.759649 | orchestrator | Saturday 12 July 2025 14:07:23 +0000 (0:00:00.766) 0:04:12.356 *********
2025-07-12 14:11:51.759658 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:11:51.759668 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:11:51.759685 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:11:51.759695 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 14:11:51.759704 | orchestrator |
2025-07-12 14:11:51.759714 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-07-12 14:11:51.759723 | orchestrator | Saturday 12 July 2025 14:07:24 +0000 (0:00:00.959) 0:04:13.315 *********
2025-07-12 14:11:51.759733 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2025-07-12 14:11:51.759742 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2025-07-12 14:11:51.759751 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2025-07-12 14:11:51.759761 | orchestrator |
2025-07-12 14:11:51.759770 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-07-12 14:11:51.759780 | orchestrator | Saturday 12 July 2025 14:07:24 +0000 (0:00:00.701) 0:04:14.017 *********
2025-07-12 14:11:51.759789 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2025-07-12 14:11:51.759798 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2025-07-12 14:11:51.759808 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2025-07-12 14:11:51.759817 | orchestrator |
2025-07-12 14:11:51.759827 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-07-12 14:11:51.759836 | orchestrator | Saturday 12 July 2025 14:07:25 +0000 (0:00:01.242) 0:04:15.259 *********
2025-07-12 14:11:51.759845 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2025-07-12 14:11:51.759855 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:11:51.759864 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2025-07-12 14:11:51.759873 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:11:51.759882 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2025-07-12 14:11:51.759892 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:11:51.759901 | orchestrator |
2025-07-12 14:11:51.759910 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2025-07-12 14:11:51.759920 | orchestrator | Saturday 12 July 2025 14:07:26 +0000 (0:00:00.722) 0:04:15.982 *********
2025-07-12 14:11:51.759929 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-07-12 14:11:51.759939 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-07-12 14:11:51.759948 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:11:51.759958 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-07-12 14:11:51.759967 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-07-12 14:11:51.759976 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:11:51.759986 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-07-12 14:11:51.759995 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-07-12 14:11:51.760005 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:11:51.760014 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-07-12 14:11:51.760023 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-07-12 14:11:51.760033 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-07-12 14:11:51.760042 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-07-12 14:11:51.760056 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-07-12 14:11:51.760066 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-07-12 14:11:51.760075 | orchestrator |
2025-07-12 14:11:51.760085 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2025-07-12 14:11:51.760094 | orchestrator | Saturday 12 July 2025 14:07:28 +0000 (0:00:02.043) 0:04:18.025 *********
2025-07-12 14:11:51.760103 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:11:51.760113 | orchestrator | skipping: [testbed-node-1] 2025-07-12
14:11:51.760129 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:51.760138 | orchestrator | changed: [testbed-node-3] 2025-07-12 14:11:51.760148 | orchestrator | changed: [testbed-node-4] 2025-07-12 14:11:51.760190 | orchestrator | changed: [testbed-node-5] 2025-07-12 14:11:51.760202 | orchestrator | 2025-07-12 14:11:51.760212 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-07-12 14:11:51.760222 | orchestrator | Saturday 12 July 2025 14:07:30 +0000 (0:00:01.368) 0:04:19.394 ********* 2025-07-12 14:11:51.760231 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:51.760241 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:51.760250 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:51.760259 | orchestrator | changed: [testbed-node-3] 2025-07-12 14:11:51.760269 | orchestrator | changed: [testbed-node-4] 2025-07-12 14:11:51.760278 | orchestrator | changed: [testbed-node-5] 2025-07-12 14:11:51.760288 | orchestrator | 2025-07-12 14:11:51.760297 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-07-12 14:11:51.760312 | orchestrator | Saturday 12 July 2025 14:07:31 +0000 (0:00:01.538) 0:04:20.933 ********* 2025-07-12 14:11:51.760323 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': 
{'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 14:11:51.760335 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 14:11:51.760346 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh 
version --daemon'], 'timeout': '30'}}}) 2025-07-12 14:11:51.760361 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 14:11:51.760383 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 14:11:51.760402 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 14:11:51.760412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 
'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 14:11:51.760424 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.760435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 14:11:51.760449 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.760465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 14:11:51.760481 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.760492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.760501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.760511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.760521 | orchestrator | 2025-07-12 14:11:51.760531 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-07-12 14:11:51.760541 | orchestrator | Saturday 12 July 2025 14:07:34 +0000 (0:00:02.639) 0:04:23.573 ********* 2025-07-12 14:11:51.760557 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 14:11:51.760568 | orchestrator | 2025-07-12 14:11:51.760578 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-07-12 14:11:51.760588 | orchestrator | Saturday 12 July 2025 14:07:35 +0000 (0:00:01.233) 0:04:24.807 ********* 2025-07-12 14:11:51.760602 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh 
version --daemon'], 'timeout': '30'}}}) 2025-07-12 14:11:51.760619 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 14:11:51.760633 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 14:11:51.760650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 
'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 14:11:51.760668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 14:11:51.760702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 14:11:51.760720 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 
'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 14:11:51.760745 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 14:11:51.760764 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 14:11:51.760782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.760800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.760830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.760854 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.760882 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.760901 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-12 14:11:51.760918 | orchestrator |
2025-07-12 14:11:51.760936 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2025-07-12 14:11:51.760953 | orchestrator | Saturday 12 July 2025 14:07:39 +0000 (0:00:03.779) 0:04:28.586 *********
2025-07-12 14:11:51.760970 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-12 14:11:51.760998 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-12 14:11:51.761021 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-12 14:11:51.761048 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-12 14:11:51.761066 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-12 14:11:51.761082 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-12 14:11:51.761109 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:11:51.761127 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:11:51.761145 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-12 14:11:51.761229 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-12 14:11:51.761259 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-12 14:11:51.761273 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:11:51.761372 |
orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-12 14:11:51.761384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 14:11:51.761394 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:11:51.761404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-12 14:11:51.761423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 14:11:51.761433 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:11:51.761447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-12 14:11:51.761457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 14:11:51.761467 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:11:51.761477 | orchestrator |
2025-07-12 14:11:51.761486 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2025-07-12 14:11:51.761496 | orchestrator | Saturday 12 July 2025 14:07:41 +0000 (0:00:01.777) 0:04:30.363 *********
2025-07-12 14:11:51.761512 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-12 14:11:51.761523 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-12 14:11:51.761539 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-12 14:11:51.761548 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-12 14:11:51.761563 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-12 14:11:51.761573 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:11:51.761588 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-12 14:11:51.761598 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:11:51.761608 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-12 14:11:51.761624 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-12 14:11:51.761634 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-12 14:11:51.761643 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:11:51.761658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-12 14:11:51.761668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 14:11:51.761678 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:11:51.761692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-12 14:11:51.761700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 14:11:51.761713 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:11:51.761722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-12 14:11:51.761730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 14:11:51.761738 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:11:51.761746 | orchestrator |
2025-07-12 14:11:51.761756 | orchestrator | TASK [nova-cell : include_tasks]
***********************************************
2025-07-12 14:11:51.761770 | orchestrator | Saturday 12 July 2025 14:07:43 +0000 (0:00:00.962) 0:04:32.294 *********
2025-07-12 14:11:51.761785 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:11:51.761799 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:11:51.761814 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:11:51.761826 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 14:11:51.761834 | orchestrator |
2025-07-12 14:11:51.761841 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2025-07-12 14:11:51.761849 | orchestrator | Saturday 12 July 2025 14:07:43 +0000 (0:00:00.962) 0:04:33.256 *********
2025-07-12 14:11:51.761857 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-07-12 14:11:51.761868 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-07-12 14:11:51.761876 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-07-12 14:11:51.761884 | orchestrator |
2025-07-12 14:11:51.761892 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2025-07-12 14:11:51.761899 | orchestrator | Saturday 12 July 2025 14:07:45 +0000 (0:00:01.189) 0:04:34.446 *********
2025-07-12 14:11:51.761907 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-07-12 14:11:51.761915 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-07-12 14:11:51.761922 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-07-12 14:11:51.761930 | orchestrator |
2025-07-12 14:11:51.761938 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2025-07-12 14:11:51.761946 | orchestrator | Saturday 12 July 2025 14:07:46 +0000 (0:00:01.028) 0:04:35.474 *********
2025-07-12 14:11:51.761953 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:11:51.761961 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:11:51.761969 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:11:51.761976 | orchestrator |
2025-07-12 14:11:51.761984 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2025-07-12 14:11:51.761992 | orchestrator | Saturday 12 July 2025 14:07:46 +0000 (0:00:00.554) 0:04:36.029 *********
2025-07-12 14:11:51.762006 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:11:51.762013 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:11:51.762061 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:11:51.762069 | orchestrator |
2025-07-12 14:11:51.762077 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2025-07-12 14:11:51.762085 | orchestrator | Saturday 12 July 2025 14:07:47 +0000 (0:00:00.544) 0:04:36.574 *********
2025-07-12 14:11:51.762092 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-07-12 14:11:51.762106 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-07-12 14:11:51.762114 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-07-12 14:11:51.762122 | orchestrator |
2025-07-12 14:11:51.762130 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2025-07-12 14:11:51.762138 | orchestrator | Saturday 12 July 2025 14:07:48 +0000 (0:00:01.416) 0:04:37.991 *********
2025-07-12 14:11:51.762145 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-07-12 14:11:51.762153 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-07-12 14:11:51.762182 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-07-12 14:11:51.762193 | orchestrator |
2025-07-12 14:11:51.762201 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2025-07-12 14:11:51.762209 | orchestrator | Saturday 12 July 2025 14:07:49 +0000 (0:00:01.260) 0:04:39.251 *********
2025-07-12 14:11:51.762217 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-07-12 14:11:51.762224 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-07-12 14:11:51.762232 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-07-12 14:11:51.762240 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2025-07-12 14:11:51.762247 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2025-07-12 14:11:51.762255 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2025-07-12 14:11:51.762263 | orchestrator |
2025-07-12 14:11:51.762271 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2025-07-12 14:11:51.762278 | orchestrator | Saturday 12 July 2025 14:07:53 +0000 (0:00:03.767) 0:04:43.019 *********
2025-07-12 14:11:51.762286 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:11:51.762294 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:11:51.762302 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:11:51.762309 | orchestrator |
2025-07-12 14:11:51.762317 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2025-07-12 14:11:51.762325 | orchestrator | Saturday 12 July 2025 14:07:54 +0000 (0:00:00.295) 0:04:43.315 *********
2025-07-12 14:11:51.762332 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:11:51.762340 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:11:51.762348 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:11:51.762355 | orchestrator |
2025-07-12 14:11:51.762363 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2025-07-12 14:11:51.762371 | orchestrator | Saturday 12 July 2025 14:07:54 +0000 (0:00:00.495) 0:04:43.810 *********
2025-07-12 14:11:51.762379 | orchestrator | changed: [testbed-node-3]
2025-07-12 14:11:51.762386 | orchestrator | changed: [testbed-node-4]
2025-07-12 14:11:51.762394 | orchestrator | changed: [testbed-node-5]
2025-07-12 14:11:51.762402 | orchestrator |
2025-07-12 14:11:51.762409 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2025-07-12 14:11:51.762417 | orchestrator | Saturday 12 July 2025 14:07:55 +0000 (0:00:01.312) 0:04:45.123 *********
2025-07-12 14:11:51.762425 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-07-12 14:11:51.762434 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-07-12 14:11:51.762442 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-07-12 14:11:51.762459 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-07-12 14:11:51.762467 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-07-12 14:11:51.762474 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-07-12 14:11:51.762482 | orchestrator |
2025-07-12 14:11:51.762490 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2025-07-12 14:11:51.762502 | orchestrator | Saturday 12 July 2025 14:07:59 +0000 (0:00:03.217) 0:04:48.341 *********
2025-07-12 14:11:51.762510 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-07-12 14:11:51.762518 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-07-12 14:11:51.762525 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-07-12 14:11:51.762533 | orchestrator | changed: [testbed-node-3] => (item=None)
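The two "Pushing ... for libvirt" tasks above install the Ceph credentials into libvirtd on each compute node: the first defines a libvirt secret from a templated XML file, the second loads the key value into that secret. As a hedged sketch of what the rendered secret definition looks like (the UUID and name below are the `client.nova` values reported in this log; the exact template lives in the kolla-ansible `nova-cell` role), the first task effectively defines:

```xml
<!-- Sketch of the libvirt secret XML for the client.nova Ceph key -->
<secret ephemeral='no' private='no'>
  <uuid>5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd</uuid>
  <usage type='ceph'>
    <name>client.nova secret</name>
  </usage>
</secret>
```

A definition like this would be applied with `virsh secret-define` and then populated via `virsh secret-set-value --secret <uuid> --base64 <key>`, which is what the subsequent "Pushing secrets key for libvirt" task corresponds to.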
2025-07-12 14:11:51.762541 | orchestrator | changed: [testbed-node-3]
2025-07-12 14:11:51.762548 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-07-12 14:11:51.762556 | orchestrator | changed: [testbed-node-4]
2025-07-12 14:11:51.762564 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-07-12 14:11:51.762571 | orchestrator | changed: [testbed-node-5]
2025-07-12 14:11:51.762579 | orchestrator |
2025-07-12 14:11:51.762587 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2025-07-12 14:11:51.762594 | orchestrator | Saturday 12 July 2025 14:08:02 +0000 (0:00:03.460) 0:04:51.801 *********
2025-07-12 14:11:51.762602 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:11:51.762610 | orchestrator |
2025-07-12 14:11:51.762617 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2025-07-12 14:11:51.762625 | orchestrator | Saturday 12 July 2025 14:08:02 +0000 (0:00:00.135) 0:04:51.936 *********
2025-07-12 14:11:51.762633 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:11:51.762641 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:11:51.762648 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:11:51.762656 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:11:51.762664 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:11:51.762671 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:11:51.762679 | orchestrator |
2025-07-12 14:11:51.762687 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2025-07-12 14:11:51.762705 | orchestrator | Saturday 12 July 2025 14:08:03 +0000 (0:00:00.822) 0:04:52.759 *********
2025-07-12 14:11:51.762713 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-07-12 14:11:51.762721 | orchestrator |
2025-07-12 14:11:51.762729 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-07-12
14:11:51.762736 | orchestrator | Saturday 12 July 2025 14:08:04 +0000 (0:00:00.677) 0:04:53.436 *********
2025-07-12 14:11:51.762744 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:11:51.762752 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:11:51.762759 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:11:51.762767 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:11:51.762774 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:11:51.762782 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:11:51.762793 | orchestrator |
2025-07-12 14:11:51.762807 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2025-07-12 14:11:51.762819 | orchestrator | Saturday 12 July 2025 14:08:04 +0000 (0:00:00.556) 0:04:53.993 *********
2025-07-12 14:11:51.762833 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-12 14:11:51.762856 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-12 14:11:51.762875 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-12 14:11:51.762889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-12 14:11:51.762911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-12 14:11:51.762926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-12 14:11:51.762949 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-12 14:11:51.762963 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-12 14:11:51.762977 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-12 14:11:51.762996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.763011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.763032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.763045 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.763060 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.763072 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.763080 | orchestrator | 2025-07-12 14:11:51.763088 | orchestrator | TASK [nova-cell : Copying over 
nova.conf] ************************************** 2025-07-12 14:11:51.763096 | orchestrator | Saturday 12 July 2025 14:08:08 +0000 (0:00:04.117) 0:04:58.111 ********* 2025-07-12 14:11:51.763105 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 14:11:51.763117 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 14:11:51.763131 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 14:11:51.763139 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 14:11:51.763147 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': 
{'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 14:11:51.763180 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 14:11:51.763204 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.763219 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 
'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.763245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 14:11:51.763260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 14:11:51.763269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 14:11:51.763282 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.763295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.763310 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.763318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:51.763326 | orchestrator | 2025-07-12 14:11:51.763334 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-07-12 14:11:51.763342 | orchestrator | Saturday 12 July 2025 14:08:14 +0000 (0:00:06.084) 0:05:04.195 ********* 2025-07-12 14:11:51.763349 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:11:51.763357 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:11:51.763365 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:11:51.763372 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:51.763384 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:51.763397 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:51.763411 | orchestrator | 2025-07-12 
14:11:51.763424 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-07-12 14:11:51.763438 | orchestrator | Saturday 12 July 2025 14:08:16 +0000 (0:00:01.552) 0:05:05.747 ********* 2025-07-12 14:11:51.763451 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-07-12 14:11:51.763465 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-07-12 14:11:51.763479 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-07-12 14:11:51.763492 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-07-12 14:11:51.763506 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-07-12 14:11:51.763520 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-07-12 14:11:51.763533 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-07-12 14:11:51.763548 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:51.763562 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-07-12 14:11:51.763574 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:51.763587 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-07-12 14:11:51.763600 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:51.763620 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-07-12 14:11:51.763635 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-07-12 14:11:51.763643 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 
'libvirtd.conf'}) 2025-07-12 14:11:51.763651 | orchestrator | 2025-07-12 14:11:51.763658 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-07-12 14:11:51.763673 | orchestrator | Saturday 12 July 2025 14:08:20 +0000 (0:00:03.617) 0:05:09.364 ********* 2025-07-12 14:11:51.763681 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:11:51.763689 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:11:51.763697 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:11:51.763705 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:51.763713 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:51.763720 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:51.763728 | orchestrator | 2025-07-12 14:11:51.763736 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-07-12 14:11:51.763744 | orchestrator | Saturday 12 July 2025 14:08:20 +0000 (0:00:00.824) 0:05:10.189 ********* 2025-07-12 14:11:51.763752 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-07-12 14:11:51.763760 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-07-12 14:11:51.763773 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-07-12 14:11:51.763781 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-07-12 14:11:51.763789 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-07-12 14:11:51.763797 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-07-12 14:11:51.763804 | orchestrator | changed: 
[testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-07-12 14:11:51.763812 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-07-12 14:11:51.763819 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-07-12 14:11:51.763827 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-07-12 14:11:51.763835 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:51.763842 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-07-12 14:11:51.763850 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:51.763858 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-07-12 14:11:51.763865 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:51.763873 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-07-12 14:11:51.763881 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-07-12 14:11:51.763888 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-07-12 14:11:51.763896 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-07-12 14:11:51.763903 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-07-12 14:11:51.763911 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 
2025-07-12 14:11:51.763919 | orchestrator | 2025-07-12 14:11:51.763926 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-07-12 14:11:51.763934 | orchestrator | Saturday 12 July 2025 14:08:26 +0000 (0:00:05.515) 0:05:15.704 ********* 2025-07-12 14:11:51.763942 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-07-12 14:11:51.763950 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-07-12 14:11:51.763966 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-07-12 14:11:51.763974 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-07-12 14:11:51.763981 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-07-12 14:11:51.763989 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-12 14:11:51.763997 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-12 14:11:51.764004 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-12 14:11:51.764012 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-07-12 14:11:51.764024 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-12 14:11:51.764032 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-12 14:11:51.764040 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-12 14:11:51.764047 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-07-12 14:11:51.764055 | orchestrator | skipping: [testbed-node-0] 
2025-07-12 14:11:51.764063 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-12 14:11:51.764070 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-07-12 14:11:51.764078 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:51.764086 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-07-12 14:11:51.764094 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:51.764101 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-12 14:11:51.764109 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-12 14:11:51.764117 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-07-12 14:11:51.764125 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-07-12 14:11:51.764136 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-07-12 14:11:51.764144 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-07-12 14:11:51.764152 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-07-12 14:11:51.764208 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-07-12 14:11:51.764217 | orchestrator | 2025-07-12 14:11:51.764225 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-07-12 14:11:51.764233 | orchestrator | Saturday 12 July 2025 14:08:33 +0000 (0:00:07.041) 0:05:22.746 ********* 2025-07-12 14:11:51.764240 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:11:51.764248 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:11:51.764256 | orchestrator | skipping: [testbed-node-5] 
2025-07-12 14:11:51.764263 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:51.764271 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:51.764279 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:51.764286 | orchestrator | 2025-07-12 14:11:51.764294 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-07-12 14:11:51.764301 | orchestrator | Saturday 12 July 2025 14:08:34 +0000 (0:00:00.547) 0:05:23.293 ********* 2025-07-12 14:11:51.764308 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:11:51.764314 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:11:51.764321 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:11:51.764327 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:51.764334 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:51.764345 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:51.764352 | orchestrator | 2025-07-12 14:11:51.764358 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-07-12 14:11:51.764365 | orchestrator | Saturday 12 July 2025 14:08:34 +0000 (0:00:00.788) 0:05:24.082 ********* 2025-07-12 14:11:51.764371 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:51.764378 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:51.764384 | orchestrator | changed: [testbed-node-3] 2025-07-12 14:11:51.764391 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:51.764397 | orchestrator | changed: [testbed-node-5] 2025-07-12 14:11:51.764404 | orchestrator | changed: [testbed-node-4] 2025-07-12 14:11:51.764410 | orchestrator | 2025-07-12 14:11:51.764417 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-07-12 14:11:51.764423 | orchestrator | Saturday 12 July 2025 14:08:36 +0000 (0:00:01.855) 0:05:25.938 ********* 2025-07-12 14:11:51.764431 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-12 14:11:51.764442 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-12 14:11:51.764450 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-12 14:11:51.764457 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:11:51.764468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-12 14:11:51.764480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 14:11:51.764487 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:11:51.764494 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-12 14:11:51.764501 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-12 14:11:51.764511 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-12 14:11:51.764518 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:11:51.764529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-12 14:11:51.764536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 14:11:51.764548 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:11:51.764555 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-12 14:11:51.764562 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-12 14:11:51.764569 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-12 14:11:51.764579 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:11:51.764586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-12 14:11:51.764597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 14:11:51.764608 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:11:51.764615 | orchestrator |
2025-07-12 14:11:51.764622 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2025-07-12 14:11:51.764629 | orchestrator | Saturday 12 July 2025 14:08:38 +0000 (0:00:01.705) 0:05:27.643 *********
2025-07-12 14:11:51.764635 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-07-12 14:11:51.764642 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-07-12 14:11:51.764649 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:11:51.764655 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-07-12 14:11:51.764662 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-07-12 14:11:51.764668 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:11:51.764675 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-07-12 14:11:51.764681 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-07-12 14:11:51.764688 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:11:51.764694 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-07-12 14:11:51.764701 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-07-12 14:11:51.764707 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-07-12 14:11:51.764714 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-07-12 14:11:51.764720 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:11:51.764727 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:11:51.764733 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-07-12 14:11:51.764740 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-07-12 14:11:51.764746 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:11:51.764753 | orchestrator |
2025-07-12 14:11:51.764759 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2025-07-12 14:11:51.764766 | orchestrator | Saturday 12 July 2025 14:08:39 +0000 (0:00:00.632) 0:05:28.275 *********
2025-07-12 14:11:51.764773 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True,
'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-12 14:11:51.764784 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-12 14:11:51.764796 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-12 14:11:51.764814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-12 14:11:51.764826 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-12 14:11:51.764837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-12 14:11:51.764848 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-12 14:11:51.764865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-12 14:11:51.764877 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-12 14:11:51.764904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 14:11:51.764918 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-12 14:11:51.764930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 14:11:51.764942 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-12 14:11:51.764959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 14:11:51.764971 | orchestrator | changed: [testbed-node-5] => (item={'key':
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-12 14:11:51.764990 | orchestrator |
2025-07-12 14:11:51.764998 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-07-12 14:11:51.765005 | orchestrator | Saturday 12 July 2025 14:08:41 +0000 (0:00:02.881) 0:05:31.157 *********
2025-07-12 14:11:51.765011 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:11:51.765018 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:11:51.765025 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:11:51.765035 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:11:51.765042 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:11:51.765049 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:11:51.765055 | orchestrator |
2025-07-12 14:11:51.765062 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-07-12 14:11:51.765068 | orchestrator | Saturday 12 July 2025 14:08:42 +0000 (0:00:00.577) 0:05:31.735 *********
2025-07-12 14:11:51.765075 | orchestrator |
2025-07-12 14:11:51.765081 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-07-12 14:11:51.765088 | orchestrator | Saturday 12 July 2025 14:08:42 +0000 (0:00:00.315) 0:05:32.051 *********
2025-07-12 14:11:51.765094 | orchestrator |
2025-07-12 14:11:51.765101 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-07-12 14:11:51.765107 | orchestrator | Saturday 12 July 2025 14:08:42 +0000 (0:00:00.139) 0:05:32.190 *********
2025-07-12 14:11:51.765114 | orchestrator |
2025-07-12 14:11:51.765120 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-07-12 14:11:51.765127 | orchestrator | Saturday 12 July 2025 14:08:43 +0000 (0:00:00.132) 0:05:32.323 *********
2025-07-12 14:11:51.765134 | orchestrator |
2025-07-12 14:11:51.765140 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-07-12 14:11:51.765147 | orchestrator | Saturday 12 July 2025 14:08:43 +0000 (0:00:00.129) 0:05:32.452 *********
2025-07-12 14:11:51.765153 | orchestrator |
2025-07-12 14:11:51.765179 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-07-12 14:11:51.765186 | orchestrator | Saturday 12 July 2025 14:08:43 +0000 (0:00:00.127) 0:05:32.580 *********
2025-07-12 14:11:51.765193 | orchestrator |
2025-07-12 14:11:51.765199 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2025-07-12 14:11:51.765206 | orchestrator | Saturday 12 July 2025 14:08:43 +0000 (0:00:00.127) 0:05:32.707 *********
2025-07-12 14:11:51.765212 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:11:51.765219 | orchestrator | changed: [testbed-node-2]
2025-07-12 14:11:51.765225 | orchestrator | changed: [testbed-node-1]
2025-07-12 14:11:51.765232 | orchestrator |
2025-07-12 14:11:51.765238 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2025-07-12 14:11:51.765245 | orchestrator | Saturday 12 July 2025 14:08:55 +0000 (0:00:11.981) 0:05:44.689 *********
2025-07-12 14:11:51.765251 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:11:51.765258 | orchestrator | changed: [testbed-node-1]
2025-07-12 14:11:51.765264 | orchestrator | changed: [testbed-node-2]
2025-07-12 14:11:51.765270 | orchestrator |
2025-07-12 14:11:51.765277 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2025-07-12 14:11:51.765284 | orchestrator | Saturday 12 July 2025 14:09:07 +0000 (0:00:11.829) 0:05:56.518 *********
2025-07-12 14:11:51.765298 | orchestrator | changed: [testbed-node-5]
2025-07-12 14:11:51.765304 | orchestrator | changed: [testbed-node-4]
2025-07-12 14:11:51.765310 | orchestrator | changed: [testbed-node-3]
2025-07-12 14:11:51.765317 | orchestrator |
2025-07-12 14:11:51.765323 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2025-07-12 14:11:51.765330 | orchestrator | Saturday 12 July 2025 14:09:32 +0000 (0:00:24.939) 0:06:21.458 *********
2025-07-12 14:11:51.765336 | orchestrator | changed: [testbed-node-3]
2025-07-12 14:11:51.765343 | orchestrator | changed: [testbed-node-5]
2025-07-12 14:11:51.765349 | orchestrator | changed: [testbed-node-4]
2025-07-12 14:11:51.765356 | orchestrator |
2025-07-12 14:11:51.765362 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2025-07-12 14:11:51.765369 | orchestrator | Saturday 12 July 2025 14:10:14 +0000 (0:00:42.024) 0:07:03.482 *********
2025-07-12 14:11:51.765375 | orchestrator | changed: [testbed-node-3]
2025-07-12 14:11:51.765382 | orchestrator | changed: [testbed-node-4]
2025-07-12 14:11:51.765388 | orchestrator | changed: [testbed-node-5]
2025-07-12 14:11:51.765395 | orchestrator |
2025-07-12 14:11:51.765401 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2025-07-12 14:11:51.765408 | orchestrator | Saturday 12 July 2025 14:10:15 +0000 (0:00:01.084) 0:07:04.567 *********
2025-07-12 14:11:51.765414 | orchestrator | changed: [testbed-node-3]
2025-07-12 14:11:51.765421 | orchestrator | changed: [testbed-node-4]
2025-07-12 14:11:51.765427 | orchestrator | changed: [testbed-node-5]
2025-07-12 14:11:51.765434 | orchestrator |
2025-07-12 14:11:51.765440 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2025-07-12 14:11:51.765447 | orchestrator | Saturday 12 July 2025 14:10:16 +0000 (0:00:00.788) 0:07:05.355 *********
2025-07-12 14:11:51.765453 | orchestrator | changed: [testbed-node-5]
2025-07-12 14:11:51.765460 | orchestrator | changed: [testbed-node-4]
2025-07-12 14:11:51.765470 | orchestrator | changed: [testbed-node-3]
2025-07-12 14:11:51.765476 | orchestrator |
2025-07-12 14:11:51.765483 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2025-07-12 14:11:51.765490 | orchestrator | Saturday 12 July 2025 14:10:42 +0000 (0:00:26.649) 0:07:32.005 *********
2025-07-12 14:11:51.765496 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:11:51.765502 | orchestrator |
2025-07-12 14:11:51.765512 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2025-07-12 14:11:51.765523 | orchestrator | Saturday 12 July 2025 14:10:42 +0000 (0:00:00.120) 0:07:32.126 *********
2025-07-12 14:11:51.765533 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:11:51.765545 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:11:51.765556 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:11:51.765567 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:11:51.765578 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:11:51.765589 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2025-07-12 14:11:51.765596 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-07-12 14:11:51.765602 | orchestrator |
2025-07-12 14:11:51.765609 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2025-07-12 14:11:51.765616 | orchestrator | Saturday 12 July 2025 14:11:04 +0000 (0:00:21.693) 0:07:53.819 *********
2025-07-12 14:11:51.765622 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:11:51.765629 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:11:51.765635 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:11:51.765642 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:11:51.765653 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:11:51.765660 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:11:51.765666 | orchestrator |
2025-07-12 14:11:51.765673 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2025-07-12 14:11:51.765679 | orchestrator | Saturday 12 July 2025 14:11:13 +0000 (0:00:08.640) 0:08:02.459 *********
2025-07-12 14:11:51.765686 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:11:51.765699 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:11:51.765706 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:11:51.765712 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:11:51.765719 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:11:51.765725 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5
2025-07-12 14:11:51.765732 | orchestrator |
2025-07-12 14:11:51.765739 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-07-12 14:11:51.765745 | orchestrator | Saturday 12 July 2025 14:11:17 +0000 (0:00:04.387) 0:08:06.846 *********
2025-07-12 14:11:51.765752 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-07-12 14:11:51.765758 | orchestrator |
2025-07-12 14:11:51.765765 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-07-12 14:11:51.765771 | orchestrator | Saturday 12 July 2025 14:11:29 +0000 (0:00:11.847) 0:08:18.694 *********
2025-07-12 14:11:51.765778 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-07-12 14:11:51.765784 | orchestrator |
2025-07-12 14:11:51.765791 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2025-07-12 14:11:51.765797 | orchestrator | Saturday 12 July 2025 14:11:30 +0000 (0:00:01.370) 0:08:20.064 *********
2025-07-12 14:11:51.765804 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:11:51.765810 | orchestrator |
2025-07-12 14:11:51.765817 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2025-07-12 14:11:51.765823 | orchestrator | Saturday 12 July 2025 14:11:32 +0000 (0:00:01.269) 0:08:21.333 *********
2025-07-12 14:11:51.765830 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-07-12 14:11:51.765836 | orchestrator |
2025-07-12 14:11:51.765843 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2025-07-12 14:11:51.765849 | orchestrator | Saturday 12 July 2025 14:11:42 +0000 (0:00:10.179) 0:08:31.513 *********
2025-07-12 14:11:51.765856 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:11:51.765862 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:11:51.765869 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:11:51.765875 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:11:51.765882 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:11:51.765888 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:11:51.765895 | orchestrator |
2025-07-12 14:11:51.765901 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2025-07-12 14:11:51.765908 | orchestrator |
2025-07-12 14:11:51.765914 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2025-07-12 14:11:51.765921 | orchestrator | Saturday 12 July 2025 14:11:43 +0000 (0:00:01.734) 0:08:33.247 *********
2025-07-12 14:11:51.765927 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:11:51.765934 | orchestrator | changed: [testbed-node-1]
2025-07-12 14:11:51.765941 | orchestrator | changed: [testbed-node-2]
2025-07-12 14:11:51.765947 | orchestrator |
2025-07-12 14:11:51.765954 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2025-07-12 14:11:51.765960 | orchestrator |
2025-07-12 14:11:51.765967 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2025-07-12 14:11:51.765973 | orchestrator | Saturday 12 July 2025 14:11:45 +0000 (0:00:01.137) 0:08:34.385 *********
2025-07-12 14:11:51.765980 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:11:51.765986 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:11:51.765993 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:11:51.765999 | orchestrator |
2025-07-12 14:11:51.766006 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2025-07-12 14:11:51.766045 | orchestrator |
2025-07-12 14:11:51.766060 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2025-07-12 14:11:51.766071 | orchestrator | Saturday 12 July 2025 14:11:45 +0000 (0:00:00.509) 0:08:34.894 *********
2025-07-12 14:11:51.766083 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2025-07-12 14:11:51.766094 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-07-12 14:11:51.766114 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-07-12 14:11:51.766126 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2025-07-12 14:11:51.766142 | orchestrator |
skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-07-12 14:11:51.766154 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-07-12 14:11:51.766183 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:11:51.766195 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-07-12 14:11:51.766206 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-07-12 14:11:51.766213 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-07-12 14:11:51.766220 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-07-12 14:11:51.766226 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-07-12 14:11:51.766233 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-07-12 14:11:51.766239 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:11:51.766246 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-07-12 14:11:51.766252 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-07-12 14:11:51.766259 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-07-12 14:11:51.766265 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-07-12 14:11:51.766272 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-07-12 14:11:51.766278 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-07-12 14:11:51.766285 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:11:51.766292 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-07-12 14:11:51.766304 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-07-12 14:11:51.766311 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-07-12 14:11:51.766317 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-07-12 14:11:51.766324 | orchestrator | 
skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-07-12 14:11:51.766330 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-07-12 14:11:51.766337 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:51.766343 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-07-12 14:11:51.766350 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-07-12 14:11:51.766356 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-07-12 14:11:51.766363 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-07-12 14:11:51.766369 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-07-12 14:11:51.766376 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-07-12 14:11:51.766382 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:51.766389 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-07-12 14:11:51.766395 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-07-12 14:11:51.766402 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-07-12 14:11:51.766409 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-07-12 14:11:51.766415 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-07-12 14:11:51.766422 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-07-12 14:11:51.766428 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:51.766435 | orchestrator | 2025-07-12 14:11:51.766441 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-07-12 14:11:51.766448 | orchestrator | 2025-07-12 14:11:51.766455 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-07-12 14:11:51.766461 | orchestrator | Saturday 12 July 2025 14:11:46 +0000 (0:00:01.327) 
0:08:36.221 ********* 2025-07-12 14:11:51.766468 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-07-12 14:11:51.766480 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-07-12 14:11:51.766487 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:51.766493 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-07-12 14:11:51.766500 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-07-12 14:11:51.766506 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:51.766513 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-07-12 14:11:51.766519 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-07-12 14:11:51.766526 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:51.766532 | orchestrator | 2025-07-12 14:11:51.766539 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-07-12 14:11:51.766545 | orchestrator | 2025-07-12 14:11:51.766552 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-07-12 14:11:51.766558 | orchestrator | Saturday 12 July 2025 14:11:47 +0000 (0:00:00.695) 0:08:36.917 ********* 2025-07-12 14:11:51.766565 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:51.766571 | orchestrator | 2025-07-12 14:11:51.766578 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-07-12 14:11:51.766585 | orchestrator | 2025-07-12 14:11:51.766591 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-07-12 14:11:51.766598 | orchestrator | Saturday 12 July 2025 14:11:48 +0000 (0:00:00.647) 0:08:37.565 ********* 2025-07-12 14:11:51.766604 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:51.766611 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:51.766617 | orchestrator | skipping: [testbed-node-2] 
2025-07-12 14:11:51.766624 | orchestrator |
2025-07-12 14:11:51.766630 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 14:11:51.766637 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 14:11:51.766644 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-07-12 14:11:51.766652 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-07-12 14:11:51.766659 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-07-12 14:11:51.766666 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-07-12 14:11:51.766672 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-07-12 14:11:51.766679 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2025-07-12 14:11:51.766685 | orchestrator |
2025-07-12 14:11:51.766692 | orchestrator |
2025-07-12 14:11:51.766699 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 14:11:51.766705 | orchestrator | Saturday 12 July 2025 14:11:48 +0000 (0:00:00.421) 0:08:37.986 *********
2025-07-12 14:11:51.766712 | orchestrator | ===============================================================================
2025-07-12 14:11:51.766756 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 42.02s
2025-07-12 14:11:51.766764 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 35.99s
2025-07-12 14:11:51.766771 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 26.65s
2025-07-12 14:11:51.766777 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 24.94s
2025-07-12 14:11:51.766784 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 22.73s
2025-07-12 14:11:51.766795 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.69s
2025-07-12 14:11:51.766802 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.54s
2025-07-12 14:11:51.766808 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.17s
2025-07-12 14:11:51.766815 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 13.95s
2025-07-12 14:11:51.766821 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 11.98s
2025-07-12 14:11:51.766828 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.85s
2025-07-12 14:11:51.766834 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 11.83s
2025-07-12 14:11:51.766841 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.77s
2025-07-12 14:11:51.766847 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.73s
2025-07-12 14:11:51.766854 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.32s
2025-07-12 14:11:51.766860 | orchestrator | nova : Restart nova-api container -------------------------------------- 11.14s
2025-07-12 14:11:51.766866 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.18s
2025-07-12 14:11:51.766873 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.64s
2025-07-12 14:11:51.766879 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.32s
2025-07-12 14:11:51.766886 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 7.16s
2025-07-12 14:11:54.795241 | orchestrator | 2025-07-12 14:11:54 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 14:11:57.837253 | orchestrator | 2025-07-12 14:11:57 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 14:12:00.878699 | orchestrator | 2025-07-12 14:12:00 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 14:12:03.920453 | orchestrator | 2025-07-12 14:12:03 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 14:12:06.959286 | orchestrator | 2025-07-12 14:12:06 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 14:12:10.000439 | orchestrator | 2025-07-12 14:12:09 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 14:12:13.040915 | orchestrator | 2025-07-12 14:12:13 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 14:12:16.074383 | orchestrator | 2025-07-12 14:12:16 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 14:12:19.111392 | orchestrator | 2025-07-12 14:12:19 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 14:12:22.147238 | orchestrator | 2025-07-12 14:12:22 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 14:12:25.186815 | orchestrator | 2025-07-12 14:12:25 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 14:12:28.225259 | orchestrator | 2025-07-12 14:12:28 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 14:12:31.277023 | orchestrator | 2025-07-12 14:12:31 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 14:12:34.325613 | orchestrator | 2025-07-12 14:12:34 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 14:12:37.366570 | orchestrator | 2025-07-12 14:12:37 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 14:12:40.407722 | orchestrator | 2025-07-12 14:12:40 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 14:12:43.447393 | orchestrator | 2025-07-12 14:12:43 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 14:12:46.487009 | orchestrator | 2025-07-12 14:12:46 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 14:12:49.526944 | orchestrator | 2025-07-12 14:12:49 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 14:12:52.571348 | orchestrator |
2025-07-12 14:12:52.875739 | orchestrator |
2025-07-12 14:12:52.885680 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sat Jul 12 14:12:52 UTC 2025
2025-07-12 14:12:52.886008 | orchestrator |
2025-07-12 14:12:53.373600 | orchestrator | ok: Runtime: 0:37:20.953378
2025-07-12 14:12:53.644206 |
2025-07-12 14:12:53.644372 | TASK [Bootstrap services]
2025-07-12 14:12:54.345573 | orchestrator |
2025-07-12 14:12:54.345778 | orchestrator | # BOOTSTRAP
2025-07-12 14:12:54.345802 | orchestrator |
2025-07-12 14:12:54.345818 | orchestrator | + set -e
2025-07-12 14:12:54.345830 | orchestrator | + echo
2025-07-12 14:12:54.345844 | orchestrator | + echo '# BOOTSTRAP'
2025-07-12 14:12:54.345861 | orchestrator | + echo
2025-07-12 14:12:54.345905 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2025-07-12 14:12:54.355025 | orchestrator | + set -e
2025-07-12 14:12:54.355083 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2025-07-12 14:12:58.797897 | orchestrator | 2025-07-12 14:12:58 | INFO  | It takes a moment until task f1a026c0-8110-4de7-aec8-88dcca80bd28 (flavor-manager) has been started and output is visible here.
2025-07-12 14:13:07.335618 | orchestrator | 2025-07-12 14:13:02 | INFO  | Flavor SCS-1V-4 created
2025-07-12 14:13:07.335756 | orchestrator | 2025-07-12 14:13:03 | INFO  | Flavor SCS-2V-8 created
2025-07-12 14:13:07.335775 | orchestrator | 2025-07-12 14:13:03 | INFO  | Flavor SCS-4V-16 created
2025-07-12 14:13:07.335788 | orchestrator | 2025-07-12 14:13:03 | INFO  | Flavor SCS-8V-32 created
2025-07-12 14:13:07.335800 | orchestrator | 2025-07-12 14:13:04 | INFO  | Flavor SCS-1V-2 created
2025-07-12 14:13:07.335811 | orchestrator | 2025-07-12 14:13:04 | INFO  | Flavor SCS-2V-4 created
2025-07-12 14:13:07.335822 | orchestrator | 2025-07-12 14:13:04 | INFO  | Flavor SCS-4V-8 created
2025-07-12 14:13:07.335834 | orchestrator | 2025-07-12 14:13:04 | INFO  | Flavor SCS-8V-16 created
2025-07-12 14:13:07.335860 | orchestrator | 2025-07-12 14:13:04 | INFO  | Flavor SCS-16V-32 created
2025-07-12 14:13:07.335871 | orchestrator | 2025-07-12 14:13:04 | INFO  | Flavor SCS-1V-8 created
2025-07-12 14:13:07.335882 | orchestrator | 2025-07-12 14:13:04 | INFO  | Flavor SCS-2V-16 created
2025-07-12 14:13:07.335893 | orchestrator | 2025-07-12 14:13:05 | INFO  | Flavor SCS-4V-32 created
2025-07-12 14:13:07.335904 | orchestrator | 2025-07-12 14:13:05 | INFO  | Flavor SCS-1L-1 created
2025-07-12 14:13:07.335915 | orchestrator | 2025-07-12 14:13:05 | INFO  | Flavor SCS-2V-4-20s created
2025-07-12 14:13:07.335926 | orchestrator | 2025-07-12 14:13:05 | INFO  | Flavor SCS-4V-16-100s created
2025-07-12 14:13:07.335937 | orchestrator | 2025-07-12 14:13:05 | INFO  | Flavor SCS-1V-4-10 created
2025-07-12 14:13:07.335948 | orchestrator | 2025-07-12 14:13:05 | INFO  | Flavor SCS-2V-8-20 created
2025-07-12 14:13:07.335959 | orchestrator | 2025-07-12 14:13:05 | INFO  | Flavor SCS-4V-16-50 created
2025-07-12 14:13:07.335970 | orchestrator | 2025-07-12 14:13:05 | INFO  | Flavor SCS-8V-32-100 created
2025-07-12 14:13:07.335981 | orchestrator | 2025-07-12 14:13:06 | INFO  | Flavor SCS-1V-2-5 created
2025-07-12 14:13:07.335992 | orchestrator | 2025-07-12 14:13:06 | INFO  | Flavor SCS-2V-4-10 created
2025-07-12 14:13:07.336003 | orchestrator | 2025-07-12 14:13:06 | INFO  | Flavor SCS-4V-8-20 created
2025-07-12 14:13:07.336014 | orchestrator | 2025-07-12 14:13:06 | INFO  | Flavor SCS-8V-16-50 created
2025-07-12 14:13:07.336025 | orchestrator | 2025-07-12 14:13:06 | INFO  | Flavor SCS-16V-32-100 created
2025-07-12 14:13:07.336036 | orchestrator | 2025-07-12 14:13:06 | INFO  | Flavor SCS-1V-8-20 created
2025-07-12 14:13:07.336047 | orchestrator | 2025-07-12 14:13:06 | INFO  | Flavor SCS-2V-16-50 created
2025-07-12 14:13:07.336057 | orchestrator | 2025-07-12 14:13:06 | INFO  | Flavor SCS-4V-32-100 created
2025-07-12 14:13:07.336069 | orchestrator | 2025-07-12 14:13:07 | INFO  | Flavor SCS-1L-1-5 created
2025-07-12 14:13:09.487651 | orchestrator | 2025-07-12 14:13:09 | INFO  | Trying to run play bootstrap-basic in environment openstack
2025-07-12 14:13:19.633877 | orchestrator | 2025-07-12 14:13:19 | INFO  | Task e72b3b53-3703-4581-abf7-1f0849beb4cf (bootstrap-basic) was prepared for execution.
2025-07-12 14:13:19.634108 | orchestrator | 2025-07-12 14:13:19 | INFO  | It takes a moment until task e72b3b53-3703-4581-abf7-1f0849beb4cf (bootstrap-basic) has been started and output is visible here.
2025-07-12 14:14:23.219912 | orchestrator |
2025-07-12 14:14:23.220034 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2025-07-12 14:14:23.220050 | orchestrator |
2025-07-12 14:14:23.220062 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-12 14:14:23.220074 | orchestrator | Saturday 12 July 2025 14:13:23 +0000 (0:00:00.079) 0:00:00.079 *********
2025-07-12 14:14:23.220085 | orchestrator | ok: [localhost]
2025-07-12 14:14:23.220096 | orchestrator |
2025-07-12 14:14:23.220107 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2025-07-12 14:14:23.220121 | orchestrator | Saturday 12 July 2025 14:13:25 +0000 (0:00:01.909) 0:00:01.989 *********
2025-07-12 14:14:23.220132 | orchestrator | ok: [localhost]
2025-07-12 14:14:23.220194 | orchestrator |
2025-07-12 14:14:23.220207 | orchestrator | TASK [Create volume type LUKS] *************************************************
2025-07-12 14:14:23.220217 | orchestrator | Saturday 12 July 2025 14:13:33 +0000 (0:00:08.101) 0:00:10.090 *********
2025-07-12 14:14:23.220228 | orchestrator | changed: [localhost]
2025-07-12 14:14:23.220239 | orchestrator |
2025-07-12 14:14:23.220250 | orchestrator | TASK [Get volume type local] ***************************************************
2025-07-12 14:14:23.220260 | orchestrator | Saturday 12 July 2025 14:13:41 +0000 (0:00:07.696) 0:00:17.786 *********
2025-07-12 14:14:23.220271 | orchestrator | ok: [localhost]
2025-07-12 14:14:23.220282 | orchestrator |
2025-07-12 14:14:23.220293 | orchestrator | TASK [Create volume type local] ************************************************
2025-07-12 14:14:23.220303 | orchestrator | Saturday 12 July 2025 14:13:48 +0000 (0:00:06.987) 0:00:24.773 *********
2025-07-12 14:14:23.220314 | orchestrator | changed: [localhost]
2025-07-12 14:14:23.220329 | orchestrator |
2025-07-12 14:14:23.220340 | orchestrator | TASK [Create public network] ***************************************************
2025-07-12 14:14:23.220351 | orchestrator | Saturday 12 July 2025 14:13:55 +0000 (0:00:07.253) 0:00:32.027 *********
2025-07-12 14:14:23.220361 | orchestrator | changed: [localhost]
2025-07-12 14:14:23.220372 | orchestrator |
2025-07-12 14:14:23.220382 | orchestrator | TASK [Set public network to default] *******************************************
2025-07-12 14:14:23.220408 | orchestrator | Saturday 12 July 2025 14:14:02 +0000 (0:00:06.928) 0:00:38.956 *********
2025-07-12 14:14:23.220419 | orchestrator | changed: [localhost]
2025-07-12 14:14:23.220431 | orchestrator |
2025-07-12 14:14:23.220455 | orchestrator | TASK [Create public subnet] ****************************************************
2025-07-12 14:14:23.220468 | orchestrator | Saturday 12 July 2025 14:14:09 +0000 (0:00:06.972) 0:00:45.929 *********
2025-07-12 14:14:23.220480 | orchestrator | changed: [localhost]
2025-07-12 14:14:23.220492 | orchestrator |
2025-07-12 14:14:23.220504 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2025-07-12 14:14:23.220516 | orchestrator | Saturday 12 July 2025 14:14:15 +0000 (0:00:05.259) 0:00:51.188 *********
2025-07-12 14:14:23.220528 | orchestrator | changed: [localhost]
2025-07-12 14:14:23.220540 | orchestrator |
2025-07-12 14:14:23.220552 | orchestrator | TASK [Create manager role] *****************************************************
2025-07-12 14:14:23.220564 | orchestrator | Saturday 12 July 2025 14:14:19 +0000 (0:00:04.416) 0:00:55.605 *********
2025-07-12 14:14:23.220576 | orchestrator | ok: [localhost]
2025-07-12 14:14:23.220588 | orchestrator |
2025-07-12 14:14:23.220600 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 14:14:23.220613 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 14:14:23.220626 | orchestrator |
2025-07-12 14:14:23.220638 | orchestrator |
2025-07-12 14:14:23.220650 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 14:14:23.220662 | orchestrator | Saturday 12 July 2025 14:14:22 +0000 (0:00:03.517) 0:00:59.123 *********
2025-07-12 14:14:23.220699 | orchestrator | ===============================================================================
2025-07-12 14:14:23.220711 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.10s
2025-07-12 14:14:23.220723 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.70s
2025-07-12 14:14:23.220735 | orchestrator | Create volume type local ------------------------------------------------ 7.25s
2025-07-12 14:14:23.220747 | orchestrator | Get volume type local --------------------------------------------------- 6.99s
2025-07-12 14:14:23.220758 | orchestrator | Set public network to default ------------------------------------------- 6.97s
2025-07-12 14:14:23.220771 | orchestrator | Create public network --------------------------------------------------- 6.93s
2025-07-12 14:14:23.220783 | orchestrator | Create public subnet ---------------------------------------------------- 5.26s
2025-07-12 14:14:23.220794 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.42s
2025-07-12 14:14:23.220804 | orchestrator | Create manager role ----------------------------------------------------- 3.52s
2025-07-12 14:14:23.220815 | orchestrator | Gathering Facts --------------------------------------------------------- 1.91s
2025-07-12 14:14:25.453934 | orchestrator | 2025-07-12 14:14:25 | INFO  | It takes a moment until task c50cfa49-8630-4086-bb27-d3dc4116dbd7 (image-manager) has been started and output is visible here.
2025-07-12 14:15:05.209561 | orchestrator | 2025-07-12 14:14:28 | INFO  | Processing image 'Cirros 0.6.2'
2025-07-12 14:15:05.209666 | orchestrator | 2025-07-12 14:14:29 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2025-07-12 14:15:05.209686 | orchestrator | 2025-07-12 14:14:29 | INFO  | Importing image Cirros 0.6.2
2025-07-12 14:15:05.209698 | orchestrator | 2025-07-12 14:14:29 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2025-07-12 14:15:05.209710 | orchestrator | 2025-07-12 14:14:30 | INFO  | Waiting for image to leave queued state...
2025-07-12 14:15:05.209722 | orchestrator | 2025-07-12 14:14:32 | INFO  | Waiting for import to complete...
2025-07-12 14:15:05.209732 | orchestrator | 2025-07-12 14:14:43 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2025-07-12 14:15:05.209743 | orchestrator | 2025-07-12 14:14:43 | INFO  | Checking parameters of 'Cirros 0.6.2'
2025-07-12 14:15:05.209754 | orchestrator | 2025-07-12 14:14:43 | INFO  | Setting internal_version = 0.6.2
2025-07-12 14:15:05.209765 | orchestrator | 2025-07-12 14:14:43 | INFO  | Setting image_original_user = cirros
2025-07-12 14:15:05.209776 | orchestrator | 2025-07-12 14:14:43 | INFO  | Adding tag os:cirros
2025-07-12 14:15:05.209787 | orchestrator | 2025-07-12 14:14:43 | INFO  | Setting property architecture: x86_64
2025-07-12 14:15:05.209798 | orchestrator | 2025-07-12 14:14:43 | INFO  | Setting property hw_disk_bus: scsi
2025-07-12 14:15:05.209808 | orchestrator | 2025-07-12 14:14:44 | INFO  | Setting property hw_rng_model: virtio
2025-07-12 14:15:05.209819 | orchestrator | 2025-07-12 14:14:44 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-07-12 14:15:05.209830 | orchestrator | 2025-07-12 14:14:44 | INFO  | Setting property hw_watchdog_action: reset
2025-07-12 14:15:05.209840 | orchestrator | 2025-07-12 14:14:44 | INFO  | Setting property hypervisor_type: qemu
2025-07-12 14:15:05.209851 | orchestrator | 2025-07-12 14:14:44 | INFO  | Setting property os_distro: cirros
2025-07-12 14:15:05.209861 | orchestrator | 2025-07-12 14:14:45 | INFO  | Setting property replace_frequency: never
2025-07-12 14:15:05.209872 | orchestrator | 2025-07-12 14:14:45 | INFO  | Setting property uuid_validity: none
2025-07-12 14:15:05.209883 | orchestrator | 2025-07-12 14:14:45 | INFO  | Setting property provided_until: none
2025-07-12 14:15:05.209914 | orchestrator | 2025-07-12 14:14:45 | INFO  | Setting property image_description: Cirros
2025-07-12 14:15:05.209932 | orchestrator | 2025-07-12 14:14:46 | INFO  | Setting property image_name: Cirros
2025-07-12 14:15:05.209943 | orchestrator | 2025-07-12 14:14:46 | INFO  | Setting property internal_version: 0.6.2
2025-07-12 14:15:05.209959 | orchestrator | 2025-07-12 14:14:46 | INFO  | Setting property image_original_user: cirros
2025-07-12 14:15:05.209970 | orchestrator | 2025-07-12 14:14:46 | INFO  | Setting property os_version: 0.6.2
2025-07-12 14:15:05.209981 | orchestrator | 2025-07-12 14:14:46 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2025-07-12 14:15:05.209992 | orchestrator | 2025-07-12 14:14:47 | INFO  | Setting property image_build_date: 2023-05-30
2025-07-12 14:15:05.210003 | orchestrator | 2025-07-12 14:14:47 | INFO  | Checking status of 'Cirros 0.6.2'
2025-07-12 14:15:05.210013 | orchestrator | 2025-07-12 14:14:47 | INFO  | Checking visibility of 'Cirros 0.6.2'
2025-07-12 14:15:05.210076 | orchestrator | 2025-07-12 14:14:47 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2025-07-12 14:15:05.210088 | orchestrator | 2025-07-12 14:14:47 | INFO  | Processing image 'Cirros 0.6.3'
2025-07-12 14:15:05.210098 | orchestrator | 2025-07-12 14:14:47 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2025-07-12 14:15:05.210109 | orchestrator | 2025-07-12 14:14:47 | INFO  | Importing image Cirros 0.6.3
2025-07-12 14:15:05.210120 | orchestrator | 2025-07-12 14:14:47 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-07-12 14:15:05.210162 | orchestrator | 2025-07-12 14:14:48 | INFO  | Waiting for image to leave queued state...
2025-07-12 14:15:05.210173 | orchestrator | 2025-07-12 14:14:50 | INFO  | Waiting for import to complete...
2025-07-12 14:15:05.210183 | orchestrator | 2025-07-12 14:15:00 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2025-07-12 14:15:05.210211 | orchestrator | 2025-07-12 14:15:00 | INFO  | Checking parameters of 'Cirros 0.6.3'
2025-07-12 14:15:05.210223 | orchestrator | 2025-07-12 14:15:00 | INFO  | Setting internal_version = 0.6.3
2025-07-12 14:15:05.210275 | orchestrator | 2025-07-12 14:15:00 | INFO  | Setting image_original_user = cirros
2025-07-12 14:15:05.210295 | orchestrator | 2025-07-12 14:15:00 | INFO  | Adding tag os:cirros
2025-07-12 14:15:05.210314 | orchestrator | 2025-07-12 14:15:00 | INFO  | Setting property architecture: x86_64
2025-07-12 14:15:05.210325 | orchestrator | 2025-07-12 14:15:00 | INFO  | Setting property hw_disk_bus: scsi
2025-07-12 14:15:05.210336 | orchestrator | 2025-07-12 14:15:01 | INFO  | Setting property hw_rng_model: virtio
2025-07-12 14:15:05.210346 | orchestrator | 2025-07-12 14:15:01 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-07-12 14:15:05.210357 | orchestrator | 2025-07-12 14:15:01 | INFO  | Setting property hw_watchdog_action: reset
2025-07-12 14:15:05.210367 | orchestrator | 2025-07-12 14:15:01 | INFO  | Setting property hypervisor_type: qemu
2025-07-12 14:15:05.210378 | orchestrator | 2025-07-12 14:15:02 | INFO  | Setting property os_distro: cirros
2025-07-12 14:15:05.210388 | orchestrator | 2025-07-12 14:15:02 | INFO  | Setting property replace_frequency: never
2025-07-12 14:15:05.210399 | orchestrator | 2025-07-12 14:15:02 | INFO  | Setting property uuid_validity: none
2025-07-12 14:15:05.210420 | orchestrator | 2025-07-12 14:15:02 | INFO  | Setting property provided_until: none
2025-07-12 14:15:05.210431 | orchestrator | 2025-07-12 14:15:02 | INFO  | Setting property image_description: Cirros
2025-07-12 14:15:05.210441 | orchestrator | 2025-07-12 14:15:03 | INFO  | Setting property image_name: Cirros
2025-07-12 14:15:05.210452 | orchestrator | 2025-07-12 14:15:03 | INFO  | Setting property internal_version: 0.6.3
2025-07-12 14:15:05.210462 | orchestrator | 2025-07-12 14:15:03 | INFO  | Setting property image_original_user: cirros
2025-07-12 14:15:05.210473 | orchestrator | 2025-07-12 14:15:03 | INFO  | Setting property os_version: 0.6.3
2025-07-12 14:15:05.210483 | orchestrator | 2025-07-12 14:15:03 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-07-12 14:15:05.210494 | orchestrator | 2025-07-12 14:15:04 | INFO  | Setting property image_build_date: 2024-09-26
2025-07-12 14:15:05.210504 | orchestrator | 2025-07-12 14:15:04 | INFO  | Checking status of 'Cirros 0.6.3'
2025-07-12 14:15:05.210515 | orchestrator | 2025-07-12 14:15:04 | INFO  | Checking visibility of 'Cirros 0.6.3'
2025-07-12 14:15:05.210531 | orchestrator | 2025-07-12 14:15:04 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2025-07-12 14:15:05.532890 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2025-07-12 14:15:07.486212 | orchestrator | 2025-07-12 14:15:07 | INFO  | date: 2025-07-12
2025-07-12 14:15:07.486332 | orchestrator | 2025-07-12 14:15:07 | INFO  | image: octavia-amphora-haproxy-2024.2.20250712.qcow2
2025-07-12 14:15:07.486561 | orchestrator | 2025-07-12 14:15:07 | INFO  | url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250712.qcow2
2025-07-12 14:15:07.486600 | orchestrator | 2025-07-12 14:15:07 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250712.qcow2.CHECKSUM
2025-07-12 14:15:07.509617 | orchestrator | 2025-07-12 14:15:07 | INFO  | checksum: c95855ae58dddb977df0d8e11b851fc66dd0abac9e608812e6020c0a95df8f26
2025-07-12 14:15:07.585286 | orchestrator | 2025-07-12 14:15:07 | INFO  | It takes a moment until task 921acd1b-9add-4906-8215-89cceb58bea5 (image-manager) has been started and output is visible here.
2025-07-12 14:16:07.437539 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_image_manager/__init__.py:5: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-07-12 14:16:07.437687 | orchestrator | from pkg_resources import get_distribution, DistributionNotFound
2025-07-12 14:16:07.437717 | orchestrator | 2025-07-12 14:15:09 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-07-12'
2025-07-12 14:16:07.437746 | orchestrator | 2025-07-12 14:15:09 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250712.qcow2: 200
2025-07-12 14:16:07.437767 | orchestrator | 2025-07-12 14:15:09 | INFO  | Importing image OpenStack Octavia Amphora 2025-07-12
2025-07-12 14:16:07.437782 | orchestrator | 2025-07-12 14:15:09 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250712.qcow2
2025-07-12 14:16:07.437794 | orchestrator | 2025-07-12 14:15:09 | INFO  | Waiting for image to leave queued state...
2025-07-12 14:16:07.437834 | orchestrator | 2025-07-12 14:15:12 | INFO  | Waiting for import to complete...
2025-07-12 14:16:07.437847 | orchestrator | 2025-07-12 14:15:22 | INFO  | Waiting for import to complete...
2025-07-12 14:16:07.437857 | orchestrator | 2025-07-12 14:15:32 | INFO  | Waiting for import to complete...
2025-07-12 14:16:07.437868 | orchestrator | 2025-07-12 14:15:42 | INFO  | Waiting for import to complete...
2025-07-12 14:16:07.437878 | orchestrator | 2025-07-12 14:15:52 | INFO  | Waiting for import to complete...
2025-07-12 14:16:07.437890 | orchestrator | 2025-07-12 14:16:02 | INFO  | Import of 'OpenStack Octavia Amphora 2025-07-12' successfully completed, reloading images
2025-07-12 14:16:07.437901 | orchestrator | 2025-07-12 14:16:03 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-07-12'
2025-07-12 14:16:07.437912 | orchestrator | 2025-07-12 14:16:03 | INFO  | Setting internal_version = 2025-07-12
2025-07-12 14:16:07.437923 | orchestrator | 2025-07-12 14:16:03 | INFO  | Setting image_original_user = ubuntu
2025-07-12 14:16:07.437933 | orchestrator | 2025-07-12 14:16:03 | INFO  | Adding tag amphora
2025-07-12 14:16:07.437944 | orchestrator | 2025-07-12 14:16:03 | INFO  | Adding tag os:ubuntu
2025-07-12 14:16:07.437954 | orchestrator | 2025-07-12 14:16:03 | INFO  | Setting property architecture: x86_64
2025-07-12 14:16:07.437965 | orchestrator | 2025-07-12 14:16:03 | INFO  | Setting property hw_disk_bus: scsi
2025-07-12 14:16:07.437975 | orchestrator | 2025-07-12 14:16:03 | INFO  | Setting property hw_rng_model: virtio
2025-07-12 14:16:07.437998 | orchestrator | 2025-07-12 14:16:04 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-07-12 14:16:07.438010 | orchestrator | 2025-07-12 14:16:04 | INFO  | Setting property hw_watchdog_action: reset
2025-07-12 14:16:07.438063 | orchestrator | 2025-07-12 14:16:04 | INFO  | Setting property hypervisor_type: qemu
2025-07-12 14:16:07.438077 | orchestrator | 2025-07-12 14:16:04 | INFO  | Setting property os_distro: ubuntu
2025-07-12 14:16:07.438091 | orchestrator | 2025-07-12 14:16:04 | INFO  | Setting property replace_frequency: quarterly
2025-07-12 14:16:07.438111 | orchestrator | 2025-07-12 14:16:05 | INFO  | Setting property uuid_validity: last-1
2025-07-12 14:16:07.438140 | orchestrator | 2025-07-12 14:16:05 | INFO  | Setting property provided_until: none
2025-07-12 14:16:07.438159 | orchestrator | 2025-07-12 14:16:05 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2025-07-12 14:16:07.438177 | orchestrator | 2025-07-12 14:16:05 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2025-07-12 14:16:07.438195 | orchestrator | 2025-07-12 14:16:06 | INFO  | Setting property internal_version: 2025-07-12
2025-07-12 14:16:07.438211 | orchestrator | 2025-07-12 14:16:06 | INFO  | Setting property image_original_user: ubuntu
2025-07-12 14:16:07.438226 | orchestrator | 2025-07-12 14:16:06 | INFO  | Setting property os_version: 2025-07-12
2025-07-12 14:16:07.438246 | orchestrator | 2025-07-12 14:16:06 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250712.qcow2
2025-07-12 14:16:07.438293 | orchestrator | 2025-07-12 14:16:06 | INFO  | Setting property image_build_date: 2025-07-12
2025-07-12 14:16:07.438314 | orchestrator | 2025-07-12 14:16:07 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-07-12'
2025-07-12 14:16:07.438331 | orchestrator | 2025-07-12 14:16:07 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-07-12'
2025-07-12 14:16:07.438391 | orchestrator | 2025-07-12 14:16:07 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2025-07-12 14:16:07.438404 | orchestrator | 2025-07-12 14:16:07 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2025-07-12 14:16:07.438416 | orchestrator | 2025-07-12 14:16:07 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2025-07-12 14:16:07.438427 | orchestrator | 2025-07-12 14:16:07 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2025-07-12 14:16:07.813342 | orchestrator | ok: Runtime: 0:03:13.758794
2025-07-12 14:16:07.889419 |
2025-07-12 14:16:07.889557 | TASK [Run checks]
2025-07-12 14:16:08.548636 | orchestrator | + set -e
2025-07-12 14:16:08.548765 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-07-12 14:16:08.548774 | orchestrator | ++ export INTERACTIVE=false
2025-07-12 14:16:08.548783 | orchestrator | ++ INTERACTIVE=false
2025-07-12 14:16:08.548789 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-07-12 14:16:08.548793 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-07-12 14:16:08.548807 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-07-12 14:16:08.549929 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-07-12 14:16:08.554823 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-07-12 14:16:08.554840 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-07-12 14:16:08.554848 | orchestrator | + echo
2025-07-12 14:16:08.554855 | orchestrator |
2025-07-12 14:16:08.554860 | orchestrator | # CHECK
2025-07-12 14:16:08.554864 | orchestrator |
2025-07-12 14:16:08.554874 | orchestrator | + echo '# CHECK'
2025-07-12 14:16:08.554879 | orchestrator | + echo
2025-07-12 14:16:08.555000 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-07-12 14:16:08.555995 | orchestrator | ++ semver 9.2.0 5.0.0
2025-07-12 14:16:08.602757 | orchestrator |
2025-07-12 14:16:08.602792 | orchestrator | ## Containers @ testbed-manager
2025-07-12 14:16:08.602798 | orchestrator |
2025-07-12 14:16:08.602803 | orchestrator | + [[ 1 -eq -1 ]]
2025-07-12 14:16:08.602808 | orchestrator | + echo
2025-07-12 14:16:08.602812 | orchestrator | + echo '## Containers @ testbed-manager'
2025-07-12 14:16:08.602817 | orchestrator | + echo
2025-07-12 14:16:08.602821 | orchestrator | + osism container testbed-manager ps
2025-07-12 14:16:10.798728 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-07-12 14:16:10.798862 | orchestrator | 48ef0bea129f registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_blackbox_exporter
2025-07-12 14:16:10.798874 | orchestrator | a6026a41b35c registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_alertmanager
2025-07-12 14:16:10.798879 | orchestrator | eb9619ed4dde registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor
2025-07-12 14:16:10.798887 | orchestrator | b986392a3ce1 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter
2025-07-12 14:16:10.798891 | orchestrator | 6c5bbd6d2c8d registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_server
2025-07-12 14:16:10.798895 | orchestrator | 4df84903b16d registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 18 minutes ago Up 18 minutes cephclient
2025-07-12 14:16:10.798905 | orchestrator | 6ff6685190cb registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron
2025-07-12 14:16:10.798909 | orchestrator | 5facfe19fcc9 registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox
2025-07-12 14:16:10.798938 | orchestrator | 3e226483637c registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd
2025-07-12 14:16:10.798943 | orchestrator | da0fd7e8a07c phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 32 minutes ago Up 32 minutes (healthy) 80/tcp phpmyadmin
2025-07-12 14:16:10.798947 | orchestrator | 4b21ae144e0f registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 33 minutes ago Up 32 minutes openstackclient
2025-07-12 14:16:10.798951 | orchestrator | 01b298a2a195 registry.osism.tech/osism/homer:v25.05.2 "/bin/sh /entrypoint…" 33 minutes ago Up 33 minutes (healthy) 8080/tcp homer
2025-07-12 14:16:10.798954 | orchestrator | d80d591ce644 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 57 minutes ago Up 56 minutes (healthy) 192.168.16.5:3128->3128/tcp squid
2025-07-12 14:16:10.798961 | orchestrator | f73374e769eb registry.osism.tech/osism/inventory-reconciler:0.20250711.0 "/sbin/tini -- /entr…" About an hour ago Up 41 minutes (healthy) manager-inventory_reconciler-1
2025-07-12 14:16:10.798981 | orchestrator | ddf662d8484a registry.osism.tech/osism/ceph-ansible:0.20250711.0 "/entrypoint.sh osis…" About an hour ago Up 41 minutes (healthy) ceph-ansible
2025-07-12 14:16:10.798986 | orchestrator | c57a4ae7fa85 registry.osism.tech/osism/osism-ansible:0.20250711.0 "/entrypoint.sh osis…" About an hour ago Up 41 minutes (healthy) osism-ansible
2025-07-12 14:16:10.798990 | orchestrator | c7d7e8450943 registry.osism.tech/osism/osism-kubernetes:0.20250711.0 "/entrypoint.sh osis…" About an hour ago Up 41 minutes (healthy) osism-kubernetes
2025-07-12 14:16:10.798993 | orchestrator | 4108f561b15d registry.osism.tech/osism/kolla-ansible:0.20250711.0 "/entrypoint.sh osis…" About an hour ago Up 41 minutes (healthy) kolla-ansible
2025-07-12 14:16:10.798997 | orchestrator | 9570f5b258bf registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" About an hour ago Up 41 minutes (healthy) 8000/tcp manager-ara-server-1
2025-07-12 14:16:10.799001 | orchestrator | 14a54ae25938 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" About an hour ago Up 42 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2025-07-12 14:16:10.799005 | orchestrator | aef8d822b9f2 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" About an hour ago Up 42 minutes (healthy) manager-flower-1
2025-07-12 14:16:10.799009 | orchestrator | 3b5fac674a0e registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" About an hour ago Up 42 minutes (healthy) manager-beat-1
2025-07-12 14:16:10.799018 | orchestrator | 190dbe592f64 registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" About an hour ago Up 42 minutes (healthy) 3306/tcp manager-mariadb-1
2025-07-12 14:16:10.799022 | orchestrator | 3b7a62997411 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" About an hour ago Up 42 minutes (healthy) manager-openstack-1
2025-07-12 14:16:10.799026 | orchestrator | a6ee46178741 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- sleep…" About an hour ago Up 42 minutes (healthy) osismclient
2025-07-12 14:16:10.799029 | orchestrator | 37f07d836ea6 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" About an hour ago Up 42 minutes (healthy) manager-listener-1
2025-07-12 14:16:10.799033 | orchestrator | 733c652efa77 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" About an hour ago Up 42 minutes (healthy) 6379/tcp manager-redis-1
2025-07-12 14:16:10.799037 | orchestrator | f5c65a74a8cb registry.osism.tech/dockerhub/library/traefik:v3.4.3 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2025-07-12 14:16:11.084097 | orchestrator |
2025-07-12 14:16:11.084202 | orchestrator | ## Images @ testbed-manager
2025-07-12 14:16:11.084208 | orchestrator |
2025-07-12 14:16:11.084213 | orchestrator | + echo
2025-07-12 14:16:11.084218 | orchestrator | + echo '## Images @ testbed-manager'
2025-07-12 14:16:11.084223 | orchestrator | + echo
2025-07-12 14:16:11.084229 | orchestrator | + osism container testbed-manager images
2025-07-12 14:16:13.243408 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-07-12 14:16:13.243495 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20250711.0 fcbac8373342 4 hours ago 571MB
2025-07-12 14:16:13.243504 | orchestrator | registry.osism.tech/osism/homer v25.05.2 d2fcb41febbc 11 hours ago 11.5MB
2025-07-12 14:16:13.243509 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 751f5a3be689 11 hours ago 234MB
2025-07-12 14:16:13.243514 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 17 hours ago 628MB
2025-07-12 14:16:13.243532 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 17 hours ago 746MB
2025-07-12 14:16:13.243536 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 17 hours ago 318MB
2025-07-12 14:16:13.243540 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20250711 cb02c47a5187 17 hours ago 891MB
2025-07-12 14:16:13.243544 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20250711 0ac8facfe451 17 hours ago 360MB
2025-07-12 14:16:13.243548 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20250711 6c4eef6335f5 17 hours ago 456MB
2025-07-12 14:16:13.243552 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 17 hours ago 410MB
2025-07-12 14:16:13.243556 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 17 hours ago 358MB
2025-07-12 14:16:13.243560 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20250711.0 7b0f9e78b4e4 18 hours ago 575MB
2025-07-12 14:16:13.243579 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20250711.0 f677f8f8094b 19 hours ago 535MB
2025-07-12 14:16:13.243583 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20250711.0 8fcfa643b744 19 hours ago 308MB
2025-07-12 14:16:13.243587 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20250711.0 267f92fc46f6 19 hours ago 1.21GB
2025-07-12 14:16:13.243591 | orchestrator | registry.osism.tech/osism/osism 0.20250709.0 ccd699d89870 2 days ago 310MB
2025-07-12 14:16:13.243595 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.5-alpine 555db38b5b92 5 days ago 41.4MB
2025-07-12 14:16:13.243598 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.3 4113453efcb3 2 weeks ago 226MB
2025-07-12 14:16:13.243602 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.2 7fb85a4198e9 4 weeks ago 329MB
2025-07-12 14:16:13.243606 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 2 months ago 453MB
2025-07-12 14:16:13.243610 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 5 months ago 571MB
2025-07-12 14:16:13.243614 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 10 months ago 300MB
2025-07-12 14:16:13.243617 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 13 months ago 146MB
2025-07-12 14:16:13.516255 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-07-12 14:16:13.516448 | orchestrator | ++ semver 9.2.0 5.0.0
2025-07-12 14:16:13.575995 | orchestrator |
2025-07-12 14:16:13.576038 | orchestrator | ## Containers @ testbed-node-0
2025-07-12 14:16:13.576044 | orchestrator |
2025-07-12 14:16:13.576050 | orchestrator | + [[ 1 -eq -1 ]]
2025-07-12 14:16:13.576054 | orchestrator | + echo
2025-07-12 14:16:13.576059 | orchestrator | + echo '## Containers @ testbed-node-0'
2025-07-12 14:16:13.576065 | orchestrator | + echo
2025-07-12 14:16:13.576069 | orchestrator | + osism container testbed-node-0 ps
2025-07-12 14:16:15.820586 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-07-12 14:16:15.820663 | orchestrator | 8762b2199bed registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy
2025-07-12 14:16:15.820670 | orchestrator | a45d2b769628 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor
2025-07-12 14:16:15.820675 | orchestrator | 21dcf68d2039 registry.osism.tech/kolla/release/nova-api:30.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_api
2025-07-12 14:16:15.820679 | orchestrator | a312103a1a6e registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-07-12 14:16:15.820683 | orchestrator | 0faec883077f registry.osism.tech/kolla/release/grafana:12.0.2.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana
2025-07-12 14:16:15.820688 | orchestrator | a4646992d6d7 registry.osism.tech/kolla/release/glance-api:29.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api
2025-07-12 14:16:15.820691 | orchestrator | d106f2d358b9 registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler
2025-07-12 14:16:15.820695 | orchestrator | 1cb43ae0721d registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_api
2025-07-12 14:16:15.820714 | orchestrator | 1d8bceed61ef registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter
2025-07-12 14:16:15.820728 | orchestrator | 1e4a9d7c5e87 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor
2025-07-12 14:16:15.820732 | orchestrator | ef1518a340ee registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter
2025-07-12 14:16:15.820736 | orchestrator | 008a77ea679c registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter
2025-07-12 14:16:15.820740 | orchestrator | f5b69a30553b registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter
2025-07-12 14:16:15.820744 | orchestrator | f8c2bbb41030 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor
2025-07-12 14:16:15.820747 | orchestrator | 4de288b5125f registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_api
2025-07-12 14:16:15.820751 | orchestrator | 0f9e81052fdc registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server
2025-07-12 14:16:15.820755 | orchestrator | 1ba172bc9dd2 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_worker
2025-07-12 14:16:15.820759 | orchestrator | 399445d6c534 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns
2025-07-12 14:16:15.820763 | orchestrator | e0554a9b9a49 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer
2025-07-12 14:16:15.820779 | orchestrator | 13027f3aaa83 registry.osism.tech/kolla/release/designate-central:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central
2025-07-12 14:16:15.820784 | orchestrator | 978ed84f7f5c registry.osism.tech/kolla/release/designate-api:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api
2025-07-12 14:16:15.820788 | orchestrator | 52ced223639a registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 15 minutes (healthy) designate_backend_bind9
2025-07-12 14:16:15.820792 | orchestrator | e5b4a4e81996 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_worker
2025-07-12 14:16:15.820795 | orchestrator | fd0263a724b0 registry.osism.tech/kolla/release/placement-api:12.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) placement_api
2025-07-12 14:16:15.820799 | orchestrator | 5e654a21cbe9 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener
2025-07-12 14:16:15.820803 | orchestrator | 3c526a032b51 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api
2025-07-12 14:16:15.820814 | orchestrator | 7d9f34bf681c registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-0
2025-07-12 14:16:15.820818 | orchestrator | ac74e71a4578 registry.osism.tech/kolla/release/keystone:26.0.1.20250711 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone
2025-07-12 14:16:15.820822 | orchestrator | 4e5d3e4cbf80 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet
2025-07-12 14:16:15.820826 | orchestrator | 218d8fc0b7ee registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh
2025-07-12 14:16:15.820833 | orchestrator | 8fbd9c13319f registry.osism.tech/kolla/release/horizon:25.1.1.20250711 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) horizon
2025-07-12 14:16:15.820837 | orchestrator | a5ad727a6828 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb
2025-07-12 14:16:15.820841 | orchestrator | 2204a3756022 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards
2025-07-12 14:16:15.820845 | orchestrator | c501e71499d3 registry.osism.tech/kolla/release/opensearch:2.19.2.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch
2025-07-12 14:16:15.820848 | orchestrator | 3230a6204144 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-0
2025-07-12 14:16:15.820852 | orchestrator | 83896037d249 registry.osism.tech/kolla/release/keepalived:2.2.7.20250711 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived
2025-07-12 14:16:15.820856 | orchestrator | 621d035806cb registry.osism.tech/kolla/release/proxysql:2.7.3.20250711 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql
2025-07-12 14:16:15.820860 | orchestrator | db21f01c8584 registry.osism.tech/kolla/release/haproxy:2.6.12.20250711 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy
2025-07-12 14:16:15.820866 | orchestrator | aa3daa8ad198 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd
2025-07-12 14:16:15.820870 | orchestrator | 2d2e037da8e4 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db
2025-07-12 14:16:15.820878 | orchestrator | f6000843b7c1 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_nb_db
2025-07-12 14:16:15.820882 | orchestrator | faa4cf40665e registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-0
2025-07-12 14:16:15.820886 | orchestrator | 430390406d67 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller
2025-07-12 14:16:15.820889 | orchestrator | a04bc17d128c registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq
2025-07-12 14:16:15.820899 | orchestrator | 92314732703f registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711 "dumb-init --single-…" 30 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd
2025-07-12 14:16:15.820903 | orchestrator | 970207fd24a7 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db
2025-07-12 14:16:15.820907 | orchestrator | 35c05d2451fc registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel
2025-07-12 14:16:15.820911 | orchestrator | 59d6a0d978f1 registry.osism.tech/kolla/release/redis:7.0.15.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis
2025-07-12 14:16:15.820915 | orchestrator | 246abd19862a registry.osism.tech/kolla/release/memcached:1.6.18.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached
2025-07-12 14:16:15.820918 | orchestrator | 5173eb167c03 registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron
2025-07-12 14:16:15.820922 | orchestrator | afad3cbf4fee registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 32 minutes ago Up 32 minutes kolla_toolbox
2025-07-12 14:16:15.820926 | orchestrator | 5581d0cbbc7d registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd
2025-07-12 14:16:16.109771 | orchestrator |
2025-07-12 14:16:16.109837 | orchestrator | ## Images @ testbed-node-0
2025-07-12 14:16:16.109843 | orchestrator |
2025-07-12 14:16:16.109847 | orchestrator | + echo
2025-07-12 14:16:16.109852 | orchestrator | + echo '## Images @ testbed-node-0'
2025-07-12 14:16:16.109857 | orchestrator | + echo
2025-07-12 14:16:16.109862 | orchestrator | + osism container testbed-node-0 images
2025-07-12 14:16:18.313519 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-07-12 14:16:18.313639 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 17 hours ago 628MB
2025-07-12 14:16:18.313654 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250711 c7f6abdb2516 17 hours ago 329MB
2025-07-12 14:16:18.313666 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250711 0a9fd950fe86 17 hours ago 326MB
2025-07-12 14:16:18.313678 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250711 d8c44fac73c2 17 hours ago 1.59GB
2025-07-12 14:16:18.313689 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250711 db87020f3b90 17 hours ago 1.55GB
2025-07-12 14:16:18.313700 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250711 4c6eaa052643 17 hours ago 417MB
2025-07-12 14:16:18.313712 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250711 cd87896ace76 17 hours ago 318MB
2025-07-12 14:16:18.313724 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 17 hours ago 746MB
2025-07-12 14:16:18.313735 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250711 4ce47f209c9b 17 hours ago 375MB
2025-07-12 14:16:18.313746 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.2.20250711 f4164dfd1b02 17 hours ago 1.01GB
2025-07-12 14:16:18.313757 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 17 hours ago 318MB
2025-07-12 14:16:18.313769 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250711 15f29551e6ce 17 hours ago 361MB
2025-07-12 14:16:18.313806 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250711 ea9ea8f197d8 17 hours ago 361MB
2025-07-12 14:16:18.313817 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250711 d4ae4a297d3b 17 hours ago 1.21GB
2025-07-12 14:16:18.313828 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250711 142dafde994c 17 hours ago 353MB
2025-07-12 14:16:18.313840 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 17 hours ago 410MB
2025-07-12 14:16:18.313851 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250711 62e13ec7689a 17 hours ago 344MB
2025-07-12 14:16:18.313862 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 17 hours ago 358MB
2025-07-12 14:16:18.313874 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250711 534f393a19e2 17 hours ago 324MB
2025-07-12 14:16:18.313902 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250711 834c4c2dcd78 17 hours ago 351MB
2025-07-12 14:16:18.313914 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250711 d7d5c3586026 17 hours ago 324MB
2025-07-12 14:16:18.313925 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250711 5892b19e1064 17 hours ago 590MB
2025-07-12 14:16:18.313936 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250711 65e36d1176bd 17 hours ago 947MB
2025-07-12 14:16:18.313946 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250711 28654474dfe5 17 hours ago 946MB
2025-07-12 14:16:18.313957 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250711 58ad45688234 17 hours ago 947MB
2025-07-12 14:16:18.313968 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250711 affa47a97549 17 hours ago 946MB
2025-07-12 14:16:18.313979 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.0.20250711 05a4552273f6 17 hours ago 1.04GB
2025-07-12 14:16:18.313989 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.0.20250711 41f8c34132c7 17 hours ago 1.04GB
2025-07-12 14:16:18.314000 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250711 06deffb77b4f 17 hours ago 1.1GB
2025-07-12 14:16:18.314010 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250711 02867223fb33 17 hours ago 1.1GB
2025-07-12 14:16:18.314080 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250711 6146c08f2b76 17 hours ago 1.12GB
2025-07-12 14:16:18.314116 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250711 6d529ee19c1c 17 hours ago 1.1GB
2025-07-12 14:16:18.314128 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250711 b1ed239b634f 17 hours ago 1.12GB
2025-07-12 14:16:18.314139 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250711 65a4d0afbb1c 17 hours ago 1.15GB
2025-07-12 14:16:18.314149 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250711 2b6bd346ad18 17 hours ago 1.04GB
2025-07-12 14:16:18.314160 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250711 1b7dd2682590 17 hours ago 1.06GB
2025-07-12 14:16:18.314171 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250711 e475391ce44d 17 hours ago 1.06GB
2025-07-12 14:16:18.314182 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250711 09290580fa03 17 hours ago 1.06GB
2025-07-12 14:16:18.314193 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.2.1.20250711 a09a8be1b711 17 hours ago 1.41GB
2025-07-12 14:16:18.314214 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.2.1.20250711 c0d28e8febb9 17 hours ago 1.41GB
2025-07-12 14:16:18.314225 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250711 e0ad0ae52bef 17 hours ago 1.29GB
2025-07-12 14:16:18.314236 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250711 b395cfe7f13f 17 hours ago 1.42GB
2025-07-12 14:16:18.314246 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250711 ee83c124eb76 17 hours ago 1.29GB
2025-07-12 14:16:18.314257 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250711 44e25b162470 17 hours ago 1.29GB
2025-07-12 14:16:18.314274 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250711 71f47d2b2def 17 hours ago 1.2GB
2025-07-12 14:16:18.314286 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250711 13b61cb4a5d2 17 hours ago 1.31GB
2025-07-12 14:16:18.314297 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250711 a030b794eaa9 17 hours ago 1.05GB
2025-07-12 14:16:18.314308 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250711 2d0954c30848 17 hours ago 1.05GB
2025-07-12 14:16:18.314319 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250711 f7fa0bcabe47 17 hours ago 1.05GB
2025-07-12 14:16:18.314330 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250711 4de726ebba0e 17 hours ago 1.06GB
2025-07-12 14:16:18.314341 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250711 a14c6ace0b24 17 hours ago 1.06GB
2025-07-12 14:16:18.314352 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250711 2a2b32cdb83f 17 hours ago 1.05GB
2025-07-12 14:16:18.314363 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20250711 f2e37439c6b7 17 hours ago 1.11GB
2025-07-12 14:16:18.314394 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20250711 b3d19c53d4de 17 hours ago 1.11GB
2025-07-12 14:16:18.314405 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250711 53889b0cb73d 17 hours ago 1.11GB
2025-07-12 14:16:18.314416 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250711 caf4f12b4799 17 hours ago 1.13GB
2025-07-12 14:16:18.314427 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250711 3ba6da1abaea 17 hours ago 1.11GB
2025-07-12 14:16:18.314438 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.1.20250711 8377b7d24f73 17 hours ago 1.24GB
2025-07-12 14:16:18.314448 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20250711 c26d685bbc69 17 hours ago 1.04GB
2025-07-12 14:16:18.314459 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20250711 55a7448b63ad 17 hours ago 1.04GB
2025-07-12 14:16:18.314470 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20250711 b8a4d60cb725 17 hours ago 1.04GB
2025-07-12 14:16:18.314481 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20250711 c0822bfcb81c 17 hours ago 1.04GB
2025-07-12 14:16:18.314492 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 2 months ago 1.27GB
2025-07-12 14:16:18.662466 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-07-12 14:16:18.663501 | orchestrator | ++ semver 9.2.0 5.0.0
2025-07-12 14:16:18.716117 | orchestrator |
2025-07-12 14:16:18.716196 | orchestrator | ## Containers @ testbed-node-1
2025-07-12 14:16:18.716235 | orchestrator |
2025-07-12 14:16:18.716246 | orchestrator | + [[ 1 -eq -1 ]]
2025-07-12 14:16:18.716258 | orchestrator | + echo
2025-07-12 14:16:18.716271 | orchestrator | + echo '## Containers @ testbed-node-1'
2025-07-12 14:16:18.716283 | orchestrator | + echo
2025-07-12 14:16:18.716294 | orchestrator | + osism container testbed-node-1 ps
2025-07-12 14:16:20.920310 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-07-12 14:16:20.920477 | orchestrator | bb28a05891bf registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy
2025-07-12 14:16:20.920495 | orchestrator | e8aa0a539e8c registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor
2025-07-12 14:16:20.920508 | orchestrator | 5b593343c45c registry.osism.tech/kolla/release/grafana:12.0.2.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2025-07-12 14:16:20.920519 | orchestrator | 9cb6b68587e8 registry.osism.tech/kolla/release/nova-api:30.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_api
2025-07-12 14:16:20.920549 | orchestrator | 08ad7f71ab9f registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-07-12 14:16:20.920561 | orchestrator | 0473de23a576 registry.osism.tech/kolla/release/glance-api:29.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api
2025-07-12 14:16:20.920573 | orchestrator | 8c8a204035b6 registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler
2025-07-12 14:16:20.920584 | orchestrator | 20e35daaf983 registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api
2025-07-12 14:16:20.920595 | orchestrator | d7c56cc88edc registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter
2025-07-12 14:16:20.920609 | orchestrator | 230b25c7b6bf registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor
2025-07-12 14:16:20.920620 | orchestrator | 68b533cff636 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter
2025-07-12 14:16:20.920631 | orchestrator | 5dabb47d05b9 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter
2025-07-12 14:16:20.920643 | orchestrator | 0c6e914b4c2b registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter
2025-07-12 14:16:20.920654 | orchestrator | 3cfc874e5622 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor
2025-07-12 14:16:20.920665 | orchestrator | e81d917f36e0 registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server
2025-07-12 14:16:20.920676 | orchestrator | cab81f958139 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_api
2025-07-12 14:16:20.920710 | orchestrator | 09ec13da7fda registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_worker
2025-07-12 14:16:20.920722 | orchestrator | 524eb28e96d4 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns
2025-07-12 14:16:20.920733 | orchestrator | b54220b4e4a1 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer
2025-07-12 14:16:20.920764 | orchestrator | 1406a2943129 registry.osism.tech/kolla/release/designate-central:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central
2025-07-12 14:16:20.920776 | orchestrator | 574749ea9fd1 registry.osism.tech/kolla/release/designate-api:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api
2025-07-12 14:16:20.920787 | orchestrator | 2eb4eb77bfdb registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_backend_bind9
2025-07-12 14:16:20.920798 | orchestrator | 8ca4c8462210 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_worker
2025-07-12 14:16:20.920814 | orchestrator | 0f4d21abb0cd registry.osism.tech/kolla/release/placement-api:12.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) placement_api
2025-07-12 14:16:20.920833 | orchestrator | b8d040a50311 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener
2025-07-12 14:16:20.920846 | orchestrator | 024ef72648f4 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api
2025-07-12 14:16:20.920859 | orchestrator | 16c5941b25a8 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-1
2025-07-12 14:16:20.920872 | orchestrator | 1e38655f8742 registry.osism.tech/kolla/release/keystone:26.0.1.20250711 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone
2025-07-12 14:16:20.920885 | orchestrator | 43974a66a969 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet
2025-07-12 14:16:20.920898 | orchestrator | 25f44d30fb5d registry.osism.tech/kolla/release/horizon:25.1.1.20250711 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon
2025-07-12 14:16:20.920910 | orchestrator | 53934058f346 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh
2025-07-12 14:16:20.920922 | orchestrator | 4ab9115749ba registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards
2025-07-12 14:16:20.920935 | orchestrator | ce1cc8c37a42 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb
2025-07-12 14:16:20.920947 | orchestrator | c57a050fcdcc registry.osism.tech/kolla/release/opensearch:2.19.2.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch
2025-07-12 14:16:20.920968 | orchestrator | e1e3e012e7d4 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-1
2025-07-12 14:16:20.920980 | orchestrator | 5c6e80b3956d registry.osism.tech/kolla/release/keepalived:2.2.7.20250711 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived
2025-07-12 14:16:20.920992 | orchestrator | 0d6f945d72c1 registry.osism.tech/kolla/release/proxysql:2.7.3.20250711 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql
2025-07-12 14:16:20.921005 | orchestrator | 36c9ada4aeb5 registry.osism.tech/kolla/release/haproxy:2.6.12.20250711 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy
2025-07-12 14:16:20.921017 | orchestrator | acb07f70f62a registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd
2025-07-12 14:16:20.921029 | orchestrator | 7cc416424dfd registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db
2025-07-12 14:16:20.921048 | orchestrator | fdfb978846af registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_nb_db
2025-07-12 14:16:20.921061 | orchestrator | cf0f8656c73f registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller
2025-07-12 14:16:20.921072 | orchestrator | c920b1fc011a registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-1
2025-07-12 14:16:20.921083 | orchestrator | 999605d12dab registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq
2025-07-12 14:16:20.921095 | orchestrator | ae6ab030ea34 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711 "dumb-init --single-…" 30 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd
2025-07-12 14:16:20.921106 | orchestrator | 64fad57e4813 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db
2025-07-12 14:16:20.921117 | orchestrator | fc36bdc081a4 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel
2025-07-12 14:16:20.921128 | orchestrator | 66d135423949 registry.osism.tech/kolla/release/redis:7.0.15.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis
2025-07-12 14:16:20.921144 | orchestrator | 60faa5534bd2 registry.osism.tech/kolla/release/memcached:1.6.18.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached
2025-07-12 14:16:20.921155 | orchestrator | 85fecd6e54e4 registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron
2025-07-12 14:16:20.921166 | orchestrator | 60c449612bdf registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox
2025-07-12 14:16:20.921178 | orchestrator | 713eec7e4b31 registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd
2025-07-12 14:16:21.201310 | orchestrator |
2025-07-12 14:16:21.201452 | orchestrator | ## Images @ testbed-node-1
2025-07-12 14:16:21.201468 | orchestrator |
2025-07-12 14:16:21.201505 | orchestrator | + echo
2025-07-12 14:16:21.201517 | orchestrator | + echo '## Images @ testbed-node-1'
2025-07-12 14:16:21.201530 | orchestrator | + echo
2025-07-12 14:16:21.201541 | orchestrator | + osism container testbed-node-1 images
2025-07-12 14:16:23.439782 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-07-12 14:16:23.439886 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 17 hours ago 628MB
2025-07-12 14:16:23.439902 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250711 c7f6abdb2516 17 hours ago 329MB
2025-07-12 14:16:23.439915 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250711 0a9fd950fe86 17 hours ago 326MB
2025-07-12 14:16:23.439926 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250711 d8c44fac73c2 17 hours ago 1.59GB
2025-07-12 14:16:23.439937 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250711 db87020f3b90 17 hours ago 1.55GB
2025-07-12 14:16:23.439948 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250711 4c6eaa052643 17 hours ago 417MB
2025-07-12 14:16:23.439959 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250711 cd87896ace76 17 hours ago 318MB
2025-07-12 14:16:23.439971 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 17 hours ago 746MB
2025-07-12 14:16:23.439982 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250711 4ce47f209c9b 17 hours ago 375MB
2025-07-12 14:16:23.439993 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.2.20250711 f4164dfd1b02 17 hours ago 1.01GB
2025-07-12 14:16:23.440004 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 17 hours ago 318MB
2025-07-12 14:16:23.440015 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250711 15f29551e6ce 17 hours ago 361MB
2025-07-12 14:16:23.440026 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250711 ea9ea8f197d8 17 hours ago 361MB
2025-07-12 14:16:23.440037 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250711 d4ae4a297d3b 17 hours ago 1.21GB
2025-07-12 14:16:23.440048 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250711 142dafde994c 17 hours ago 353MB
2025-07-12 14:16:23.440059 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 17 hours ago 410MB
2025-07-12 14:16:23.440070 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250711 62e13ec7689a 17 hours ago 344MB
2025-07-12 14:16:23.440081 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 17 hours ago 358MB
2025-07-12 14:16:23.440092 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250711 834c4c2dcd78 17 hours ago 351MB
2025-07-12 14:16:23.440103 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250711 534f393a19e2 17 hours ago 324MB
2025-07-12 14:16:23.440115 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250711 d7d5c3586026 17 hours ago 324MB
2025-07-12 14:16:23.440126 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250711 5892b19e1064 17 hours ago 590MB
2025-07-12 14:16:23.440137 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250711 65e36d1176bd 17 hours ago 947MB
2025-07-12 14:16:23.440148 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250711 28654474dfe5 17 hours ago 946MB
2025-07-12 14:16:23.440183 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250711 58ad45688234 17 hours ago 947MB
2025-07-12 14:16:23.440195 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250711 affa47a97549 17 hours ago 946MB
2025-07-12 14:16:23.440206 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250711 65a4d0afbb1c 17 hours ago 1.15GB
2025-07-12 14:16:23.440217 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250711 2b6bd346ad18 17 hours ago 1.04GB
2025-07-12 14:16:23.440228 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250711 1b7dd2682590 17 hours ago 1.06GB
2025-07-12 14:16:23.440239 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250711 e475391ce44d 17 hours ago 1.06GB
2025-07-12 14:16:23.440250 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250711 09290580fa03 17 hours ago 1.06GB
2025-07-12 14:16:23.440280 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.2.1.20250711 a09a8be1b711 17 hours ago 1.41GB
2025-07-12 14:16:23.440292 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.2.1.20250711 c0d28e8febb9 17 hours ago 1.41GB
2025-07-12 14:16:23.440320 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250711 e0ad0ae52bef 17 hours ago 1.29GB
2025-07-12 14:16:23.440334 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250711 b395cfe7f13f 17 hours ago 1.42GB
2025-07-12 14:16:23.440346 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250711 ee83c124eb76 17 hours ago 1.29GB
2025-07-12 14:16:23.440358 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250711 44e25b162470 17 hours ago 1.29GB
2025-07-12 14:16:23.440371 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250711 71f47d2b2def 17 hours ago 1.2GB
2025-07-12 14:16:23.440421 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250711 13b61cb4a5d2 17 hours ago 1.31GB
2025-07-12 14:16:23.440441 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250711 a030b794eaa9 17 hours ago 1.05GB
2025-07-12 14:16:23.440453 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250711 2d0954c30848 17 hours ago 1.05GB
2025-07-12 14:16:23.440465 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250711 f7fa0bcabe47 17 hours ago 1.05GB
2025-07-12 14:16:23.440478 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250711 4de726ebba0e 17 hours ago 1.06GB
2025-07-12 14:16:23.440490 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250711 a14c6ace0b24 17 hours ago 1.06GB
2025-07-12 14:16:23.440503 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250711 2a2b32cdb83f 17 hours ago 1.05GB
2025-07-12 14:16:23.440515 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250711 53889b0cb73d 17 hours ago 1.11GB
2025-07-12 14:16:23.440528 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250711 caf4f12b4799 17 hours ago 1.13GB
2025-07-12 14:16:23.440540 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250711 3ba6da1abaea 17 hours ago 1.11GB
2025-07-12 14:16:23.440553 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.1.20250711 8377b7d24f73 17 hours ago 1.24GB
2025-07-12 14:16:23.440565 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 2 months ago 1.27GB
2025-07-12 14:16:23.748542 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-07-12 14:16:23.748820 | orchestrator | ++ semver 9.2.0 5.0.0
2025-07-12 14:16:23.800284 | orchestrator |
2025-07-12 14:16:23.800408 | orchestrator | ## Containers @ testbed-node-2
2025-07-12 14:16:23.800427 | orchestrator |
2025-07-12 14:16:23.800439 | orchestrator | + [[ 1 -eq -1 ]]
2025-07-12 14:16:23.800451 | orchestrator | + echo
2025-07-12 14:16:23.800463 | orchestrator | + echo '## Containers @ testbed-node-2'
2025-07-12 14:16:23.800475 | orchestrator | + echo
2025-07-12 14:16:23.800486 | orchestrator | + osism container testbed-node-2 ps
2025-07-12 14:16:26.051322 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-07-12 14:16:26.051410 | orchestrator | 0883035c6493 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy
2025-07-12 14:16:26.051426 | orchestrator | d3ad468641b5 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor
2025-07-12 14:16:26.051437 | orchestrator | 2e6dc38bb13d registry.osism.tech/kolla/release/grafana:12.0.2.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana
2025-07-12 14:16:26.051449 | orchestrator | 4a1868913afc registry.osism.tech/kolla/release/nova-api:30.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_api
2025-07-12 14:16:26.051460 | orchestrator | e486e0ae9a83 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-07-12 14:16:26.051471 | orchestrator | 9ecbcc3e60f7 registry.osism.tech/kolla/release/glance-api:29.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api
2025-07-12 14:16:26.051482 | orchestrator | ae477c74bbb9 registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler
2025-07-12 14:16:26.051494 | orchestrator | 8c0fd466e171 registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711 "dumb-init --single-…" 12 minutes ago Up 11 minutes (healthy) cinder_api
2025-07-12 14:16:26.051505 | orchestrator | 1d0f476429ff registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter
2025-07-12 14:16:26.051518 | orchestrator | 9daa981ce279 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor
2025-07-12 14:16:26.051529 | orchestrator | 7627a831dd66 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter
2025-07-12 14:16:26.051540 | orchestrator | dbf4f8e975f1 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter
2025-07-12 14:16:26.051552 | orchestrator | a4852d18e762 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter
2025-07-12 14:16:26.051563 | orchestrator | 9454664894e8 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor
2025-07-12 14:16:26.051574 | orchestrator | 4e66c15e4854 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_api
2025-07-12 14:16:26.051612 | orchestrator | ccbbdc3c6781 registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server
2025-07-12 14:16:26.051639 | orchestrator | 293887df2a11 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_worker
2025-07-12 14:16:26.051650 | orchestrator | c9e1bdf7b65f registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns
2025-07-12 14:16:26.051662 | orchestrator | 016076cba6dd registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer
2025-07-12 14:16:26.051690 | orchestrator | 9d7abbe1263c registry.osism.tech/kolla/release/designate-central:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central
2025-07-12 14:16:26.051701 | orchestrator | 9eef8a3ddc1d registry.osism.tech/kolla/release/designate-api:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api
2025-07-12 14:16:26.051713 | orchestrator | 1eb69f285290 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_backend_bind9
2025-07-12 14:16:26.051724 | orchestrator | 99e6b69923f1 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_worker
2025-07-12 14:16:26.051735 | orchestrator | fcdd38b6e7ac registry.osism.tech/kolla/release/placement-api:12.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) placement_api
2025-07-12 14:16:26.051746 | orchestrator | 34d848fa6fbb registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener
2025-07-12 14:16:26.051757 | orchestrator | d449637c8bcb registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api
2025-07-12 14:16:26.051768 | orchestrator | 68396caf1d8b registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-2
2025-07-12 14:16:26.051779 | orchestrator | 4b911462fada registry.osism.tech/kolla/release/keystone:26.0.1.20250711 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone
2025-07-12 14:16:26.051790 | orchestrator | d2f01aff6713 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet
2025-07-12 14:16:26.051801 | orchestrator | d9413007d564 registry.osism.tech/kolla/release/horizon:25.1.1.20250711 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon
2025-07-12 14:16:26.051812 | orchestrator | 762f59026b59 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh
2025-07-12 14:16:26.051823 | orchestrator | 55894a5c5e37 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards
2025-07-12 14:16:26.051834 | orchestrator | 021fd998ccfc registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb
2025-07-12 14:16:26.051845 | orchestrator | b852bbb82f92 registry.osism.tech/kolla/release/opensearch:2.19.2.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch
2025-07-12 14:16:26.051863 | orchestrator | 54c64c9ba246 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-2
2025-07-12 14:16:26.051875 | orchestrator | 83308ec27309 registry.osism.tech/kolla/release/keepalived:2.2.7.20250711 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived
2025-07-12 14:16:26.051888 | orchestrator | 5cc5d581268c registry.osism.tech/kolla/release/proxysql:2.7.3.20250711 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql
2025-07-12 14:16:26.051900 | orchestrator | 36d92e06126e registry.osism.tech/kolla/release/haproxy:2.6.12.20250711 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) haproxy
2025-07-12 14:16:26.051913 | orchestrator | 028ef125e3d1 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_northd
2025-07-12 14:16:26.051926 | orchestrator | 7fbb5657635c registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_sb_db
2025-07-12 14:16:26.051943 | orchestrator | 7ff7485941eb registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_nb_db
2025-07-12 14:16:26.051956 | orchestrator | 52a98b4bc32b registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq
2025-07-12 14:16:26.051969 | orchestrator | 2147566cbeba registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller
2025-07-12 14:16:26.051982 | orchestrator | 32b9be9ee837 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-2
2025-07-12 14:16:26.051995 | orchestrator | 15751fdd5e73 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711 "dumb-init --single-…" 30 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd
2025-07-12 14:16:26.052008 | orchestrator | 925e074893db registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db
2025-07-12 14:16:26.052020 | orchestrator | 0c2143fe46f1 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel
2025-07-12 14:16:26.052033 | orchestrator | 5339784cdb5c registry.osism.tech/kolla/release/redis:7.0.15.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis
2025-07-12 14:16:26.052046 | orchestrator | 5cb0434f9cf2 registry.osism.tech/kolla/release/memcached:1.6.18.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached
2025-07-12 14:16:26.052065 | orchestrator | 61ef33bdd353 registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron
2025-07-12 14:16:26.052078 | orchestrator | 1da8846b3e36 registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox
2025-07-12 14:16:26.052091 | orchestrator | ef15b1581929 registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd
2025-07-12 14:16:26.333052 | orchestrator |
2025-07-12 14:16:26.333154 | orchestrator | ## Images @ testbed-node-2
2025-07-12 14:16:26.333171 | orchestrator |
2025-07-12 14:16:26.333184 | orchestrator | + echo
2025-07-12 14:16:26.333197 | orchestrator | + echo '## Images @ testbed-node-2'
2025-07-12 14:16:26.333209 | orchestrator | + echo
2025-07-12 14:16:26.333220 | orchestrator | + osism container testbed-node-2 images
2025-07-12 14:16:28.599691 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-07-12 14:16:28.599792 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 17 hours ago 628MB
2025-07-12 14:16:28.599822 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250711 c7f6abdb2516 17 hours ago 329MB
2025-07-12 14:16:28.599835 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250711 0a9fd950fe86 17 hours ago 326MB
2025-07-12 14:16:28.599846 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250711 d8c44fac73c2 17 hours ago 1.59GB
2025-07-12 14:16:28.599857 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250711 db87020f3b90 17 hours ago 1.55GB
2025-07-12 14:16:28.599869 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250711 4c6eaa052643 17 hours ago 417MB
2025-07-12 14:16:28.599879 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250711 cd87896ace76 17 hours ago 318MB
2025-07-12 14:16:28.599891 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 17 hours ago 746MB
2025-07-12 14:16:28.599902 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250711 4ce47f209c9b 17 hours ago 375MB
2025-07-12 14:16:28.599913 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.2.20250711 f4164dfd1b02 17 hours ago 1.01GB
2025-07-12 14:16:28.599924 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 17 hours ago 318MB
2025-07-12 14:16:28.599935 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250711 15f29551e6ce 17 hours ago 361MB
2025-07-12 14:16:28.599946 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250711 ea9ea8f197d8 17 hours ago 361MB
2025-07-12 14:16:28.599957 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250711 d4ae4a297d3b 17 hours ago 1.21GB
2025-07-12 14:16:28.599968 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250711 142dafde994c 17 hours ago 353MB
2025-07-12 14:16:28.599980 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 17 hours ago 410MB
2025-07-12 14:16:28.599991 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250711 62e13ec7689a 17 hours ago 344MB
2025-07-12 14:16:28.600002 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 17 hours ago 358MB
2025-07-12 14:16:28.600013 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250711 534f393a19e2 17 hours ago 324MB
2025-07-12 14:16:28.600024 | orchestrator |
registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250711 834c4c2dcd78 17 hours ago 351MB 2025-07-12 14:16:28.600035 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250711 d7d5c3586026 17 hours ago 324MB 2025-07-12 14:16:28.600046 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250711 5892b19e1064 17 hours ago 590MB 2025-07-12 14:16:28.600056 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250711 65e36d1176bd 17 hours ago 947MB 2025-07-12 14:16:28.600094 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250711 28654474dfe5 17 hours ago 946MB 2025-07-12 14:16:28.600105 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250711 58ad45688234 17 hours ago 947MB 2025-07-12 14:16:28.600116 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250711 affa47a97549 17 hours ago 946MB 2025-07-12 14:16:28.600127 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250711 65a4d0afbb1c 17 hours ago 1.15GB 2025-07-12 14:16:28.600138 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250711 2b6bd346ad18 17 hours ago 1.04GB 2025-07-12 14:16:28.600149 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250711 1b7dd2682590 17 hours ago 1.06GB 2025-07-12 14:16:28.600159 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250711 e475391ce44d 17 hours ago 1.06GB 2025-07-12 14:16:28.600170 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250711 09290580fa03 17 hours ago 1.06GB 2025-07-12 14:16:28.600199 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.2.1.20250711 a09a8be1b711 17 hours ago 1.41GB 2025-07-12 14:16:28.600210 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.2.1.20250711 c0d28e8febb9 17 hours ago 1.41GB 2025-07-12 14:16:28.600222 | orchestrator | 
registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250711 e0ad0ae52bef 17 hours ago 1.29GB 2025-07-12 14:16:28.600236 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250711 b395cfe7f13f 17 hours ago 1.42GB 2025-07-12 14:16:28.600248 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250711 ee83c124eb76 17 hours ago 1.29GB 2025-07-12 14:16:28.600260 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250711 44e25b162470 17 hours ago 1.29GB 2025-07-12 14:16:28.600273 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250711 71f47d2b2def 17 hours ago 1.2GB 2025-07-12 14:16:28.600285 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250711 13b61cb4a5d2 17 hours ago 1.31GB 2025-07-12 14:16:28.600297 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250711 a030b794eaa9 17 hours ago 1.05GB 2025-07-12 14:16:28.600309 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250711 2d0954c30848 17 hours ago 1.05GB 2025-07-12 14:16:28.600322 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250711 f7fa0bcabe47 17 hours ago 1.05GB 2025-07-12 14:16:28.600334 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250711 4de726ebba0e 17 hours ago 1.06GB 2025-07-12 14:16:28.600346 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250711 a14c6ace0b24 17 hours ago 1.06GB 2025-07-12 14:16:28.600359 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250711 2a2b32cdb83f 17 hours ago 1.05GB 2025-07-12 14:16:28.600371 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250711 53889b0cb73d 17 hours ago 1.11GB 2025-07-12 14:16:28.600419 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250711 caf4f12b4799 17 hours ago 1.13GB 2025-07-12 14:16:28.600432 | orchestrator | 
registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250711 3ba6da1abaea 17 hours ago 1.11GB 2025-07-12 14:16:28.600444 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.1.20250711 8377b7d24f73 17 hours ago 1.24GB 2025-07-12 14:16:28.600457 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 2 months ago 1.27GB 2025-07-12 14:16:28.899946 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-07-12 14:16:28.906563 | orchestrator | + set -e 2025-07-12 14:16:28.906602 | orchestrator | + source /opt/manager-vars.sh 2025-07-12 14:16:28.908157 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-07-12 14:16:28.908179 | orchestrator | ++ NUMBER_OF_NODES=6 2025-07-12 14:16:28.908191 | orchestrator | ++ export CEPH_VERSION=reef 2025-07-12 14:16:28.908202 | orchestrator | ++ CEPH_VERSION=reef 2025-07-12 14:16:28.908214 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-07-12 14:16:28.908246 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-07-12 14:16:28.908258 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-07-12 14:16:28.908269 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-07-12 14:16:28.908280 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-07-12 14:16:28.908291 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-07-12 14:16:28.908302 | orchestrator | ++ export ARA=false 2025-07-12 14:16:28.908313 | orchestrator | ++ ARA=false 2025-07-12 14:16:28.908325 | orchestrator | ++ export DEPLOY_MODE=manager 2025-07-12 14:16:28.908336 | orchestrator | ++ DEPLOY_MODE=manager 2025-07-12 14:16:28.908347 | orchestrator | ++ export TEMPEST=false 2025-07-12 14:16:28.908358 | orchestrator | ++ TEMPEST=false 2025-07-12 14:16:28.908369 | orchestrator | ++ export IS_ZUUL=true 2025-07-12 14:16:28.908380 | orchestrator | ++ IS_ZUUL=true 2025-07-12 14:16:28.908413 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180 2025-07-12 14:16:28.908425 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180 2025-07-12 14:16:28.908436 | orchestrator | ++ export EXTERNAL_API=false 2025-07-12 14:16:28.908446 | orchestrator | ++ EXTERNAL_API=false 2025-07-12 14:16:28.908457 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-07-12 14:16:28.908468 | orchestrator | ++ IMAGE_USER=ubuntu 2025-07-12 14:16:28.908479 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-07-12 14:16:28.908490 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-07-12 14:16:28.908501 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-07-12 14:16:28.908511 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-07-12 14:16:28.908523 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-07-12 14:16:28.908534 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-07-12 14:16:28.914915 | orchestrator | + set -e 2025-07-12 14:16:28.914947 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-07-12 14:16:28.914959 | orchestrator | ++ export INTERACTIVE=false 2025-07-12 14:16:28.914970 | orchestrator | ++ INTERACTIVE=false 2025-07-12 14:16:28.914981 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-07-12 14:16:28.914992 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-07-12 14:16:28.915004 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-07-12 14:16:28.916972 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-07-12 14:16:28.919919 | orchestrator | 2025-07-12 14:16:28.919967 | orchestrator | # Ceph status 2025-07-12 14:16:28.919984 | orchestrator | 2025-07-12 14:16:28.919996 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-07-12 14:16:28.920008 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-07-12 14:16:28.920021 | orchestrator | + echo 2025-07-12 14:16:28.920037 | orchestrator | + echo '# Ceph status' 2025-07-12 14:16:28.920050 | orchestrator | + echo 2025-07-12 14:16:28.920062 | orchestrator | + ceph 
-s 2025-07-12 14:16:29.473714 | orchestrator | cluster: 2025-07-12 14:16:29.473820 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-07-12 14:16:29.473846 | orchestrator | health: HEALTH_OK 2025-07-12 14:16:29.473868 | orchestrator | 2025-07-12 14:16:29.473889 | orchestrator | services: 2025-07-12 14:16:29.473903 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 28m) 2025-07-12 14:16:29.473928 | orchestrator | mgr: testbed-node-1(active, since 16m), standbys: testbed-node-2, testbed-node-0 2025-07-12 14:16:29.473940 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-07-12 14:16:29.473952 | orchestrator | osd: 6 osds: 6 up (since 24m), 6 in (since 25m) 2025-07-12 14:16:29.473963 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-07-12 14:16:29.473975 | orchestrator | 2025-07-12 14:16:29.473986 | orchestrator | data: 2025-07-12 14:16:29.473997 | orchestrator | volumes: 1/1 healthy 2025-07-12 14:16:29.474008 | orchestrator | pools: 14 pools, 401 pgs 2025-07-12 14:16:29.474080 | orchestrator | objects: 524 objects, 2.2 GiB 2025-07-12 14:16:29.474092 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-07-12 14:16:29.474104 | orchestrator | pgs: 401 active+clean 2025-07-12 14:16:29.474115 | orchestrator | 2025-07-12 14:16:29.525832 | orchestrator | 2025-07-12 14:16:29.525892 | orchestrator | # Ceph versions 2025-07-12 14:16:29.525905 | orchestrator | 2025-07-12 14:16:29.525917 | orchestrator | + echo 2025-07-12 14:16:29.525929 | orchestrator | + echo '# Ceph versions' 2025-07-12 14:16:29.525941 | orchestrator | + echo 2025-07-12 14:16:29.525977 | orchestrator | + ceph versions 2025-07-12 14:16:30.105212 | orchestrator | { 2025-07-12 14:16:30.105312 | orchestrator | "mon": { 2025-07-12 14:16:30.105327 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-07-12 14:16:30.105340 | orchestrator | }, 2025-07-12 14:16:30.105351 | orchestrator | 
"mgr": { 2025-07-12 14:16:30.105362 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-07-12 14:16:30.105373 | orchestrator | }, 2025-07-12 14:16:30.105430 | orchestrator | "osd": { 2025-07-12 14:16:30.105443 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2025-07-12 14:16:30.105454 | orchestrator | }, 2025-07-12 14:16:30.105465 | orchestrator | "mds": { 2025-07-12 14:16:30.105476 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-07-12 14:16:30.105487 | orchestrator | }, 2025-07-12 14:16:30.105498 | orchestrator | "rgw": { 2025-07-12 14:16:30.105509 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-07-12 14:16:30.105520 | orchestrator | }, 2025-07-12 14:16:30.105531 | orchestrator | "overall": { 2025-07-12 14:16:30.105543 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-07-12 14:16:30.105554 | orchestrator | } 2025-07-12 14:16:30.105565 | orchestrator | } 2025-07-12 14:16:30.150133 | orchestrator | 2025-07-12 14:16:30.150184 | orchestrator | # Ceph OSD tree 2025-07-12 14:16:30.150197 | orchestrator | 2025-07-12 14:16:30.150209 | orchestrator | + echo 2025-07-12 14:16:30.150220 | orchestrator | + echo '# Ceph OSD tree' 2025-07-12 14:16:30.150233 | orchestrator | + echo 2025-07-12 14:16:30.150244 | orchestrator | + ceph osd df tree 2025-07-12 14:16:30.665991 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-07-12 14:16:30.666167 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default 2025-07-12 14:16:30.666182 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2025-07-12 14:16:30.666195 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 
GiB 1 KiB 70 MiB 19 GiB 6.96 1.18 209 up osd.1 2025-07-12 14:16:30.666206 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 996 MiB 923 MiB 1 KiB 74 MiB 19 GiB 4.87 0.82 181 up osd.5 2025-07-12 14:16:30.666217 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2025-07-12 14:16:30.666228 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 74 MiB 19 GiB 5.54 0.94 186 up osd.0 2025-07-12 14:16:30.666239 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.30 1.06 202 up osd.4 2025-07-12 14:16:30.666250 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2025-07-12 14:16:30.666261 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 74 MiB 19 GiB 5.93 1.00 192 up osd.2 2025-07-12 14:16:30.666272 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 70 MiB 19 GiB 5.91 1.00 200 up osd.3 2025-07-12 14:16:30.666283 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92 2025-07-12 14:16:30.666295 | orchestrator | MIN/MAX VAR: 0.82/1.18 STDDEV: 0.64 2025-07-12 14:16:30.719448 | orchestrator | 2025-07-12 14:16:30.719489 | orchestrator | # Ceph monitor status 2025-07-12 14:16:30.719502 | orchestrator | 2025-07-12 14:16:30.719514 | orchestrator | + echo 2025-07-12 14:16:30.719526 | orchestrator | + echo '# Ceph monitor status' 2025-07-12 14:16:30.719537 | orchestrator | + echo 2025-07-12 14:16:30.719548 | orchestrator | + ceph mon stat 2025-07-12 14:16:31.301938 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-07-12 14:16:31.344590 | orchestrator | 2025-07-12 
14:16:31.344692 | orchestrator | # Ceph quorum status 2025-07-12 14:16:31.344717 | orchestrator | 2025-07-12 14:16:31.344729 | orchestrator | + echo 2025-07-12 14:16:31.344741 | orchestrator | + echo '# Ceph quorum status' 2025-07-12 14:16:31.344753 | orchestrator | + echo 2025-07-12 14:16:31.344937 | orchestrator | + ceph quorum_status 2025-07-12 14:16:31.345340 | orchestrator | + jq 2025-07-12 14:16:32.001202 | orchestrator | { 2025-07-12 14:16:32.001295 | orchestrator | "election_epoch": 8, 2025-07-12 14:16:32.001312 | orchestrator | "quorum": [ 2025-07-12 14:16:32.001325 | orchestrator | 0, 2025-07-12 14:16:32.001337 | orchestrator | 1, 2025-07-12 14:16:32.001348 | orchestrator | 2 2025-07-12 14:16:32.001358 | orchestrator | ], 2025-07-12 14:16:32.001369 | orchestrator | "quorum_names": [ 2025-07-12 14:16:32.001381 | orchestrator | "testbed-node-0", 2025-07-12 14:16:32.001626 | orchestrator | "testbed-node-1", 2025-07-12 14:16:32.001644 | orchestrator | "testbed-node-2" 2025-07-12 14:16:32.001663 | orchestrator | ], 2025-07-12 14:16:32.001681 | orchestrator | "quorum_leader_name": "testbed-node-0", 2025-07-12 14:16:32.001700 | orchestrator | "quorum_age": 1732, 2025-07-12 14:16:32.001727 | orchestrator | "features": { 2025-07-12 14:16:32.001748 | orchestrator | "quorum_con": "4540138322906710015", 2025-07-12 14:16:32.001767 | orchestrator | "quorum_mon": [ 2025-07-12 14:16:32.001784 | orchestrator | "kraken", 2025-07-12 14:16:32.001801 | orchestrator | "luminous", 2025-07-12 14:16:32.001819 | orchestrator | "mimic", 2025-07-12 14:16:32.001837 | orchestrator | "osdmap-prune", 2025-07-12 14:16:32.001856 | orchestrator | "nautilus", 2025-07-12 14:16:32.001874 | orchestrator | "octopus", 2025-07-12 14:16:32.001891 | orchestrator | "pacific", 2025-07-12 14:16:32.001909 | orchestrator | "elector-pinging", 2025-07-12 14:16:32.001926 | orchestrator | "quincy", 2025-07-12 14:16:32.001944 | orchestrator | "reef" 2025-07-12 14:16:32.001962 | orchestrator | ] 2025-07-12 
14:16:32.001982 | orchestrator | }, 2025-07-12 14:16:32.002000 | orchestrator | "monmap": { 2025-07-12 14:16:32.002081 | orchestrator | "epoch": 1, 2025-07-12 14:16:32.002111 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-07-12 14:16:32.002138 | orchestrator | "modified": "2025-07-12T13:47:21.448916Z", 2025-07-12 14:16:32.002163 | orchestrator | "created": "2025-07-12T13:47:21.448916Z", 2025-07-12 14:16:32.002182 | orchestrator | "min_mon_release": 18, 2025-07-12 14:16:32.002201 | orchestrator | "min_mon_release_name": "reef", 2025-07-12 14:16:32.002213 | orchestrator | "election_strategy": 1, 2025-07-12 14:16:32.002226 | orchestrator | "disallowed_leaders: ": "", 2025-07-12 14:16:32.002239 | orchestrator | "stretch_mode": false, 2025-07-12 14:16:32.002252 | orchestrator | "tiebreaker_mon": "", 2025-07-12 14:16:32.002264 | orchestrator | "removed_ranks: ": "", 2025-07-12 14:16:32.002276 | orchestrator | "features": { 2025-07-12 14:16:32.002289 | orchestrator | "persistent": [ 2025-07-12 14:16:32.002301 | orchestrator | "kraken", 2025-07-12 14:16:32.002314 | orchestrator | "luminous", 2025-07-12 14:16:32.002326 | orchestrator | "mimic", 2025-07-12 14:16:32.002338 | orchestrator | "osdmap-prune", 2025-07-12 14:16:32.002351 | orchestrator | "nautilus", 2025-07-12 14:16:32.002363 | orchestrator | "octopus", 2025-07-12 14:16:32.002376 | orchestrator | "pacific", 2025-07-12 14:16:32.002415 | orchestrator | "elector-pinging", 2025-07-12 14:16:32.002429 | orchestrator | "quincy", 2025-07-12 14:16:32.002447 | orchestrator | "reef" 2025-07-12 14:16:32.002465 | orchestrator | ], 2025-07-12 14:16:32.002485 | orchestrator | "optional": [] 2025-07-12 14:16:32.002502 | orchestrator | }, 2025-07-12 14:16:32.002521 | orchestrator | "mons": [ 2025-07-12 14:16:32.002539 | orchestrator | { 2025-07-12 14:16:32.002557 | orchestrator | "rank": 0, 2025-07-12 14:16:32.002575 | orchestrator | "name": "testbed-node-0", 2025-07-12 14:16:32.002593 | orchestrator | 
"public_addrs": { 2025-07-12 14:16:32.002609 | orchestrator | "addrvec": [ 2025-07-12 14:16:32.002625 | orchestrator | { 2025-07-12 14:16:32.002642 | orchestrator | "type": "v2", 2025-07-12 14:16:32.002658 | orchestrator | "addr": "192.168.16.10:3300", 2025-07-12 14:16:32.002676 | orchestrator | "nonce": 0 2025-07-12 14:16:32.002693 | orchestrator | }, 2025-07-12 14:16:32.002710 | orchestrator | { 2025-07-12 14:16:32.002728 | orchestrator | "type": "v1", 2025-07-12 14:16:32.002784 | orchestrator | "addr": "192.168.16.10:6789", 2025-07-12 14:16:32.002803 | orchestrator | "nonce": 0 2025-07-12 14:16:32.002821 | orchestrator | } 2025-07-12 14:16:32.002838 | orchestrator | ] 2025-07-12 14:16:32.002856 | orchestrator | }, 2025-07-12 14:16:32.002873 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-07-12 14:16:32.002890 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-07-12 14:16:32.002907 | orchestrator | "priority": 0, 2025-07-12 14:16:32.002924 | orchestrator | "weight": 0, 2025-07-12 14:16:32.002942 | orchestrator | "crush_location": "{}" 2025-07-12 14:16:32.002960 | orchestrator | }, 2025-07-12 14:16:32.002978 | orchestrator | { 2025-07-12 14:16:32.002995 | orchestrator | "rank": 1, 2025-07-12 14:16:32.003013 | orchestrator | "name": "testbed-node-1", 2025-07-12 14:16:32.003030 | orchestrator | "public_addrs": { 2025-07-12 14:16:32.003048 | orchestrator | "addrvec": [ 2025-07-12 14:16:32.003067 | orchestrator | { 2025-07-12 14:16:32.003084 | orchestrator | "type": "v2", 2025-07-12 14:16:32.003103 | orchestrator | "addr": "192.168.16.11:3300", 2025-07-12 14:16:32.003120 | orchestrator | "nonce": 0 2025-07-12 14:16:32.003138 | orchestrator | }, 2025-07-12 14:16:32.003157 | orchestrator | { 2025-07-12 14:16:32.003175 | orchestrator | "type": "v1", 2025-07-12 14:16:32.003194 | orchestrator | "addr": "192.168.16.11:6789", 2025-07-12 14:16:32.003212 | orchestrator | "nonce": 0 2025-07-12 14:16:32.003230 | orchestrator | } 2025-07-12 14:16:32.003258 | 
orchestrator | ] 2025-07-12 14:16:32.003278 | orchestrator | }, 2025-07-12 14:16:32.003298 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-07-12 14:16:32.003317 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2025-07-12 14:16:32.003335 | orchestrator | "priority": 0, 2025-07-12 14:16:32.003353 | orchestrator | "weight": 0, 2025-07-12 14:16:32.003371 | orchestrator | "crush_location": "{}" 2025-07-12 14:16:32.003416 | orchestrator | }, 2025-07-12 14:16:32.003433 | orchestrator | { 2025-07-12 14:16:32.003444 | orchestrator | "rank": 2, 2025-07-12 14:16:32.003455 | orchestrator | "name": "testbed-node-2", 2025-07-12 14:16:32.003465 | orchestrator | "public_addrs": { 2025-07-12 14:16:32.003476 | orchestrator | "addrvec": [ 2025-07-12 14:16:32.003487 | orchestrator | { 2025-07-12 14:16:32.003498 | orchestrator | "type": "v2", 2025-07-12 14:16:32.003508 | orchestrator | "addr": "192.168.16.12:3300", 2025-07-12 14:16:32.003519 | orchestrator | "nonce": 0 2025-07-12 14:16:32.003530 | orchestrator | }, 2025-07-12 14:16:32.003540 | orchestrator | { 2025-07-12 14:16:32.003551 | orchestrator | "type": "v1", 2025-07-12 14:16:32.003562 | orchestrator | "addr": "192.168.16.12:6789", 2025-07-12 14:16:32.003573 | orchestrator | "nonce": 0 2025-07-12 14:16:32.003583 | orchestrator | } 2025-07-12 14:16:32.003594 | orchestrator | ] 2025-07-12 14:16:32.003604 | orchestrator | }, 2025-07-12 14:16:32.003615 | orchestrator | "addr": "192.168.16.12:6789/0", 2025-07-12 14:16:32.003626 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2025-07-12 14:16:32.003637 | orchestrator | "priority": 0, 2025-07-12 14:16:32.003647 | orchestrator | "weight": 0, 2025-07-12 14:16:32.003658 | orchestrator | "crush_location": "{}" 2025-07-12 14:16:32.003669 | orchestrator | } 2025-07-12 14:16:32.003679 | orchestrator | ] 2025-07-12 14:16:32.003690 | orchestrator | } 2025-07-12 14:16:32.003701 | orchestrator | } 2025-07-12 14:16:32.003729 | orchestrator | 2025-07-12 14:16:32.003741 | 
orchestrator | + echo 2025-07-12 14:16:32.004000 | orchestrator | # Ceph free space status 2025-07-12 14:16:32.004024 | orchestrator | 2025-07-12 14:16:32.004036 | orchestrator | + echo '# Ceph free space status' 2025-07-12 14:16:32.004047 | orchestrator | + echo 2025-07-12 14:16:32.004058 | orchestrator | + ceph df 2025-07-12 14:16:32.613860 | orchestrator | --- RAW STORAGE --- 2025-07-12 14:16:32.613962 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2025-07-12 14:16:32.613989 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-07-12 14:16:32.614001 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-07-12 14:16:32.614012 | orchestrator | 2025-07-12 14:16:32.614109 | orchestrator | --- POOLS --- 2025-07-12 14:16:32.614122 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2025-07-12 14:16:32.614135 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2025-07-12 14:16:32.614145 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2025-07-12 14:16:32.614192 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2025-07-12 14:16:32.614205 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2025-07-12 14:16:32.614216 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2025-07-12 14:16:32.614227 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2025-07-12 14:16:32.614238 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB 2025-07-12 14:16:32.614249 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2025-07-12 14:16:32.614260 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB 2025-07-12 14:16:32.614271 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2025-07-12 14:16:32.614281 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2025-07-12 14:16:32.614292 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.93 35 GiB 2025-07-12 14:16:32.614303 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2025-07-12 14:16:32.614314 | orchestrator | 
vms 14 32 19 B 2 12 KiB 0 35 GiB 2025-07-12 14:16:32.659791 | orchestrator | ++ semver 9.2.0 5.0.0 2025-07-12 14:16:32.707565 | orchestrator | + [[ 1 -eq -1 ]] 2025-07-12 14:16:32.707629 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2025-07-12 14:16:32.707644 | orchestrator | + osism apply facts 2025-07-12 14:16:34.616382 | orchestrator | 2025-07-12 14:16:34 | INFO  | Task b0bc247c-73ec-43aa-87a1-5865b01d031a (facts) was prepared for execution. 2025-07-12 14:16:34.616563 | orchestrator | 2025-07-12 14:16:34 | INFO  | It takes a moment until task b0bc247c-73ec-43aa-87a1-5865b01d031a (facts) has been started and output is visible here. 2025-07-12 14:16:47.895908 | orchestrator | 2025-07-12 14:16:47.896026 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-07-12 14:16:47.896041 | orchestrator | 2025-07-12 14:16:47.896054 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-07-12 14:16:47.896065 | orchestrator | Saturday 12 July 2025 14:16:38 +0000 (0:00:00.383) 0:00:00.383 ********* 2025-07-12 14:16:47.896076 | orchestrator | ok: [testbed-manager] 2025-07-12 14:16:47.896088 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:16:47.896100 | orchestrator | ok: [testbed-node-1] 2025-07-12 14:16:47.896111 | orchestrator | ok: [testbed-node-2] 2025-07-12 14:16:47.896122 | orchestrator | ok: [testbed-node-3] 2025-07-12 14:16:47.896133 | orchestrator | ok: [testbed-node-4] 2025-07-12 14:16:47.896144 | orchestrator | ok: [testbed-node-5] 2025-07-12 14:16:47.896154 | orchestrator | 2025-07-12 14:16:47.896166 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-07-12 14:16:47.896194 | orchestrator | Saturday 12 July 2025 14:16:40 +0000 (0:00:01.489) 0:00:01.872 ********* 2025-07-12 14:16:47.896206 | orchestrator | skipping: [testbed-manager] 2025-07-12 14:16:47.896218 | orchestrator | skipping: [testbed-node-0] 2025-07-12 
14:16:47.896229 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:16:47.896240 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:16:47.896251 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:16:47.896262 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:16:47.896272 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:16:47.896283 | orchestrator |
2025-07-12 14:16:47.896294 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-07-12 14:16:47.896305 | orchestrator |
2025-07-12 14:16:47.896316 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-07-12 14:16:47.896327 | orchestrator | Saturday 12 July 2025 14:16:41 +0000 (0:00:01.261) 0:00:03.134 *********
2025-07-12 14:16:47.896338 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:16:47.896349 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:16:47.896359 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:16:47.896370 | orchestrator | ok: [testbed-manager]
2025-07-12 14:16:47.896411 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:16:47.896479 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:16:47.896492 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:16:47.896504 | orchestrator |
2025-07-12 14:16:47.896517 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-07-12 14:16:47.896529 | orchestrator |
2025-07-12 14:16:47.896541 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-07-12 14:16:47.896553 | orchestrator | Saturday 12 July 2025 14:16:46 +0000 (0:00:05.264) 0:00:08.399 *********
2025-07-12 14:16:47.896566 | orchestrator | skipping: [testbed-manager]
2025-07-12 14:16:47.896578 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:16:47.896590 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:16:47.896601 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:16:47.896614 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:16:47.896626 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:16:47.896637 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:16:47.896649 | orchestrator |
2025-07-12 14:16:47.896661 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 14:16:47.896673 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 14:16:47.896686 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 14:16:47.896699 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 14:16:47.896710 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 14:16:47.896723 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 14:16:47.896735 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 14:16:47.896747 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 14:16:47.896759 | orchestrator |
2025-07-12 14:16:47.896770 | orchestrator |
2025-07-12 14:16:47.896783 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 14:16:47.896795 | orchestrator | Saturday 12 July 2025 14:16:47 +0000 (0:00:00.558) 0:00:08.958 *********
2025-07-12 14:16:47.896807 | orchestrator | ===============================================================================
2025-07-12 14:16:47.896818 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.26s
2025-07-12 14:16:47.896829 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.49s
2025-07-12 14:16:47.896839 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.26s
2025-07-12 14:16:47.896850 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s
2025-07-12 14:16:48.180601 | orchestrator | + osism validate ceph-mons
2025-07-12 14:17:19.790404 | orchestrator |
2025-07-12 14:17:19.790548 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2025-07-12 14:17:19.790566 | orchestrator |
2025-07-12 14:17:19.790579 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-07-12 14:17:19.790591 | orchestrator | Saturday 12 July 2025 14:17:04 +0000 (0:00:00.432) 0:00:00.432 *********
2025-07-12 14:17:19.790603 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 14:17:19.790614 | orchestrator |
2025-07-12 14:17:19.790625 | orchestrator | TASK [Create report output directory] ******************************************
2025-07-12 14:17:19.790636 | orchestrator | Saturday 12 July 2025 14:17:05 +0000 (0:00:00.644) 0:00:01.076 *********
2025-07-12 14:17:19.790666 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 14:17:19.790678 | orchestrator |
2025-07-12 14:17:19.790689 | orchestrator | TASK [Define report vars] ******************************************************
2025-07-12 14:17:19.790700 | orchestrator | Saturday 12 July 2025 14:17:05 +0000 (0:00:00.840) 0:00:01.917 *********
2025-07-12 14:17:19.790711 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:19.790723 | orchestrator |
2025-07-12 14:17:19.790735 | orchestrator | TASK [Prepare test data for container existance test] **************************
2025-07-12 14:17:19.790746 | orchestrator | Saturday 12 July 2025 14:17:06 +0000 (0:00:00.252) 0:00:02.169 *********
2025-07-12 14:17:19.790756 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:19.790768 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:17:19.790778 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:17:19.790789 | orchestrator |
2025-07-12 14:17:19.790801 | orchestrator | TASK [Get container info] ******************************************************
2025-07-12 14:17:19.790812 | orchestrator | Saturday 12 July 2025 14:17:06 +0000 (0:00:00.316) 0:00:02.486 *********
2025-07-12 14:17:19.790823 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:17:19.790834 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:17:19.790844 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:19.790856 | orchestrator |
2025-07-12 14:17:19.790867 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2025-07-12 14:17:19.790878 | orchestrator | Saturday 12 July 2025 14:17:07 +0000 (0:00:00.984) 0:00:03.471 *********
2025-07-12 14:17:19.790889 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:17:19.790901 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:17:19.790912 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:17:19.790922 | orchestrator |
2025-07-12 14:17:19.790933 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2025-07-12 14:17:19.790944 | orchestrator | Saturday 12 July 2025 14:17:07 +0000 (0:00:00.273) 0:00:03.744 *********
2025-07-12 14:17:19.790955 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:19.790966 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:17:19.790977 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:17:19.790988 | orchestrator |
2025-07-12 14:17:19.790999 | orchestrator | TASK [Prepare test data] *******************************************************
2025-07-12 14:17:19.791010 | orchestrator | Saturday 12 July 2025 14:17:08 +0000 (0:00:00.490) 0:00:04.235 *********
2025-07-12 14:17:19.791021 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:19.791032 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:17:19.791043 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:17:19.791054 | orchestrator |
2025-07-12 14:17:19.791065 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2025-07-12 14:17:19.791076 | orchestrator | Saturday 12 July 2025 14:17:08 +0000 (0:00:00.305) 0:00:04.540 *********
2025-07-12 14:17:19.791087 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:17:19.791098 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:17:19.791109 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:17:19.791120 | orchestrator |
2025-07-12 14:17:19.791131 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2025-07-12 14:17:19.791142 | orchestrator | Saturday 12 July 2025 14:17:08 +0000 (0:00:00.288) 0:00:04.829 *********
2025-07-12 14:17:19.791153 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:19.791164 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:17:19.791175 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:17:19.791186 | orchestrator |
2025-07-12 14:17:19.791196 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-07-12 14:17:19.791208 | orchestrator | Saturday 12 July 2025 14:17:09 +0000 (0:00:00.315) 0:00:05.145 *********
2025-07-12 14:17:19.791218 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:17:19.791229 | orchestrator |
2025-07-12 14:17:19.791240 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-07-12 14:17:19.791251 | orchestrator | Saturday 12 July 2025 14:17:09 +0000 (0:00:00.241) 0:00:05.386 *********
2025-07-12 14:17:19.791262 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:17:19.791280 | orchestrator |
2025-07-12 14:17:19.791292 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-07-12 14:17:19.791302 | orchestrator | Saturday 12 July 2025 14:17:10 +0000 (0:00:00.634) 0:00:06.021 *********
2025-07-12 14:17:19.791313 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:17:19.791324 | orchestrator |
2025-07-12 14:17:19.791335 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 14:17:19.791396 | orchestrator | Saturday 12 July 2025 14:17:10 +0000 (0:00:00.230) 0:00:06.251 *********
2025-07-12 14:17:19.791408 | orchestrator |
2025-07-12 14:17:19.791419 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 14:17:19.791430 | orchestrator | Saturday 12 July 2025 14:17:10 +0000 (0:00:00.068) 0:00:06.320 *********
2025-07-12 14:17:19.791441 | orchestrator |
2025-07-12 14:17:19.791452 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 14:17:19.791463 | orchestrator | Saturday 12 July 2025 14:17:10 +0000 (0:00:00.068) 0:00:06.389 *********
2025-07-12 14:17:19.791490 | orchestrator |
2025-07-12 14:17:19.791502 | orchestrator | TASK [Print report file information] *******************************************
2025-07-12 14:17:19.791513 | orchestrator | Saturday 12 July 2025 14:17:10 +0000 (0:00:00.071) 0:00:06.460 *********
2025-07-12 14:17:19.791523 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:17:19.791534 | orchestrator |
2025-07-12 14:17:19.791545 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-07-12 14:17:19.791556 | orchestrator | Saturday 12 July 2025 14:17:10 +0000 (0:00:00.270) 0:00:06.731 *********
2025-07-12 14:17:19.791567 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:17:19.791578 | orchestrator |
2025-07-12 14:17:19.791606 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2025-07-12 14:17:19.791628 | orchestrator | Saturday 12 July 2025 14:17:10 +0000 (0:00:00.226) 0:00:06.957 *********
2025-07-12 14:17:19.791640 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:19.791651 | orchestrator |
2025-07-12 14:17:19.791662 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2025-07-12 14:17:19.791673 | orchestrator | Saturday 12 July 2025 14:17:11 +0000 (0:00:00.119) 0:00:07.076 *********
2025-07-12 14:17:19.791684 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:17:19.791695 | orchestrator |
2025-07-12 14:17:19.791706 | orchestrator | TASK [Set quorum test data] ****************************************************
2025-07-12 14:17:19.791717 | orchestrator | Saturday 12 July 2025 14:17:12 +0000 (0:00:01.663) 0:00:08.739 *********
2025-07-12 14:17:19.791728 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:19.791739 | orchestrator |
2025-07-12 14:17:19.791749 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2025-07-12 14:17:19.791760 | orchestrator | Saturday 12 July 2025 14:17:13 +0000 (0:00:00.314) 0:00:09.054 *********
2025-07-12 14:17:19.791771 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:17:19.791782 | orchestrator |
2025-07-12 14:17:19.791793 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2025-07-12 14:17:19.791803 | orchestrator | Saturday 12 July 2025 14:17:13 +0000 (0:00:00.123) 0:00:09.178 *********
2025-07-12 14:17:19.791814 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:19.791825 | orchestrator |
2025-07-12 14:17:19.791840 | orchestrator | TASK [Set fsid test vars] ******************************************************
2025-07-12 14:17:19.791851 | orchestrator | Saturday 12 July 2025 14:17:13 +0000 (0:00:00.480) 0:00:09.659 *********
2025-07-12 14:17:19.791862 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:19.791873 | orchestrator |
2025-07-12 14:17:19.791884 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2025-07-12 14:17:19.791894 | orchestrator | Saturday 12 July 2025 14:17:14 +0000 (0:00:00.327) 0:00:09.986 *********
2025-07-12 14:17:19.791905 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:17:19.791916 | orchestrator |
2025-07-12 14:17:19.791927 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2025-07-12 14:17:19.791938 | orchestrator | Saturday 12 July 2025 14:17:14 +0000 (0:00:00.150) 0:00:10.136 *********
2025-07-12 14:17:19.791956 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:19.791967 | orchestrator |
2025-07-12 14:17:19.791978 | orchestrator | TASK [Prepare status test vars] ************************************************
2025-07-12 14:17:19.791989 | orchestrator | Saturday 12 July 2025 14:17:14 +0000 (0:00:00.136) 0:00:10.272 *********
2025-07-12 14:17:19.792000 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:19.792010 | orchestrator |
2025-07-12 14:17:19.792021 | orchestrator | TASK [Gather status data] ******************************************************
2025-07-12 14:17:19.792032 | orchestrator | Saturday 12 July 2025 14:17:14 +0000 (0:00:00.122) 0:00:10.394 *********
2025-07-12 14:17:19.792043 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:17:19.792054 | orchestrator |
2025-07-12 14:17:19.792065 | orchestrator | TASK [Set health test data] ****************************************************
2025-07-12 14:17:19.792075 | orchestrator | Saturday 12 July 2025 14:17:15 +0000 (0:00:01.279) 0:00:11.674 *********
2025-07-12 14:17:19.792086 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:19.792097 | orchestrator |
2025-07-12 14:17:19.792108 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2025-07-12 14:17:19.792119 | orchestrator | Saturday 12 July 2025 14:17:16 +0000 (0:00:00.309) 0:00:11.984 *********
2025-07-12 14:17:19.792130 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:17:19.792140 | orchestrator |
2025-07-12 14:17:19.792151 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2025-07-12 14:17:19.792162 | orchestrator | Saturday 12 July 2025 14:17:16 +0000 (0:00:00.145) 0:00:12.129 *********
2025-07-12 14:17:19.792173 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:19.792184 | orchestrator |
2025-07-12 14:17:19.792195 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2025-07-12 14:17:19.792206 | orchestrator | Saturday 12 July 2025 14:17:16 +0000 (0:00:00.135) 0:00:12.264 *********
2025-07-12 14:17:19.792216 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:17:19.792227 | orchestrator |
2025-07-12 14:17:19.792238 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2025-07-12 14:17:19.792249 | orchestrator | Saturday 12 July 2025 14:17:16 +0000 (0:00:00.123) 0:00:12.388 *********
2025-07-12 14:17:19.792260 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:17:19.792270 | orchestrator |
2025-07-12 14:17:19.792281 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-07-12 14:17:19.792292 | orchestrator | Saturday 12 July 2025 14:17:16 +0000 (0:00:00.121) 0:00:12.510 *********
2025-07-12 14:17:19.792303 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 14:17:19.792314 | orchestrator |
2025-07-12 14:17:19.792325 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-07-12 14:17:19.792336 | orchestrator | Saturday 12 July 2025 14:17:17 +0000 (0:00:00.661) 0:00:13.171 *********
2025-07-12 14:17:19.792347 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:17:19.792357 | orchestrator |
2025-07-12 14:17:19.792368 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-07-12 14:17:19.792379 | orchestrator | Saturday 12 July 2025 14:17:17 +0000 (0:00:00.259) 0:00:13.430 *********
2025-07-12 14:17:19.792390 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 14:17:19.792401 | orchestrator |
2025-07-12 14:17:19.792412 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-07-12 14:17:19.792423 | orchestrator | Saturday 12 July 2025 14:17:19 +0000 (0:00:01.575) 0:00:15.006 *********
2025-07-12 14:17:19.792433 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 14:17:19.792444 | orchestrator |
2025-07-12 14:17:19.792455 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-07-12 14:17:19.792481 | orchestrator | Saturday 12 July 2025 14:17:19 +0000 (0:00:00.258) 0:00:15.265 *********
2025-07-12 14:17:19.792492 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 14:17:19.792503 | orchestrator |
2025-07-12 14:17:19.792521 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 14:17:21.911311 | orchestrator | Saturday 12 July 2025 14:17:19 +0000 (0:00:00.252) 0:00:15.518 *********
2025-07-12 14:17:21.911422 | orchestrator |
2025-07-12 14:17:21.911437 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 14:17:21.911449 | orchestrator | Saturday 12 July 2025 14:17:19 +0000 (0:00:00.067) 0:00:15.585 *********
2025-07-12 14:17:21.911460 | orchestrator |
2025-07-12 14:17:21.911516 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 14:17:21.911528 | orchestrator | Saturday 12 July 2025 14:17:19 +0000 (0:00:00.071) 0:00:15.657 *********
2025-07-12 14:17:21.911540 | orchestrator |
2025-07-12 14:17:21.911551 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-07-12 14:17:21.911562 | orchestrator | Saturday 12 July 2025 14:17:19 +0000 (0:00:00.085) 0:00:15.742 *********
2025-07-12 14:17:21.911574 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 14:17:21.911585 | orchestrator |
2025-07-12 14:17:21.911596 | orchestrator | TASK [Print report file information] *******************************************
2025-07-12 14:17:21.911607 | orchestrator | Saturday 12 July 2025 14:17:21 +0000 (0:00:01.278) 0:00:17.021 *********
2025-07-12 14:17:21.911618 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-07-12 14:17:21.911630 | orchestrator |  "msg": [
2025-07-12 14:17:21.911643 | orchestrator |  "Validator run completed.",
2025-07-12 14:17:21.911676 | orchestrator |  "You can find the report file here:",
2025-07-12 14:17:21.911689 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-07-12T14:17:04+00:00-report.json",
2025-07-12 14:17:21.911701 | orchestrator |  "on the following host:",
2025-07-12 14:17:21.911712 | orchestrator |  "testbed-manager"
2025-07-12 14:17:21.911723 | orchestrator |  ]
2025-07-12 14:17:21.911735 | orchestrator | }
2025-07-12 14:17:21.911747 | orchestrator |
2025-07-12 14:17:21.911758 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 14:17:21.911770 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-07-12 14:17:21.911783 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 14:17:21.911795 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 14:17:21.911806 | orchestrator |
2025-07-12 14:17:21.911817 | orchestrator |
2025-07-12 14:17:21.911828 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 14:17:21.911839 | orchestrator | Saturday 12 July 2025 14:17:21 +0000 (0:00:00.422) 0:00:17.443 *********
2025-07-12 14:17:21.911855 | orchestrator | ===============================================================================
2025-07-12 14:17:21.911866 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.66s
2025-07-12 14:17:21.911878 | orchestrator | Aggregate test results step one ----------------------------------------- 1.58s
2025-07-12 14:17:21.911889 | orchestrator | Gather status data ------------------------------------------------------ 1.28s
2025-07-12 14:17:21.911900 | orchestrator | Write report file ------------------------------------------------------- 1.28s
2025-07-12 14:17:21.911911 | orchestrator | Get container info ------------------------------------------------------ 0.98s
2025-07-12 14:17:21.911922 | orchestrator | Create report output directory ------------------------------------------ 0.84s
2025-07-12 14:17:21.911933 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.66s
2025-07-12 14:17:21.911944 | orchestrator | Get timestamp for report file ------------------------------------------- 0.64s
2025-07-12 14:17:21.911955 | orchestrator | Aggregate test results step two ----------------------------------------- 0.63s
2025-07-12 14:17:21.911966 | orchestrator | Set test result to passed if container is existing ---------------------- 0.49s
2025-07-12 14:17:21.912003 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.48s
2025-07-12 14:17:21.912014 | orchestrator | Print report file information ------------------------------------------- 0.42s
2025-07-12 14:17:21.912026 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.33s
2025-07-12 14:17:21.912037 | orchestrator | Prepare test data for container existance test -------------------------- 0.32s
2025-07-12 14:17:21.912048 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.32s
2025-07-12 14:17:21.912059 | orchestrator | Set quorum test data ---------------------------------------------------- 0.31s
2025-07-12 14:17:21.912070 | orchestrator | Set health test data ---------------------------------------------------- 0.31s
2025-07-12 14:17:21.912081 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s
2025-07-12 14:17:21.912092 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.29s
2025-07-12 14:17:21.912103 | orchestrator | Set test result to failed if container is missing ----------------------- 0.27s
2025-07-12 14:17:22.197299 | orchestrator | + osism validate ceph-mgrs
2025-07-12 14:17:52.719672 | orchestrator |
2025-07-12 14:17:52.719793 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2025-07-12 14:17:52.719811 | orchestrator |
2025-07-12 14:17:52.719824 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-07-12 14:17:52.719837 | orchestrator | Saturday 12 July 2025 14:17:38 +0000 (0:00:00.439) 0:00:00.439 *********
2025-07-12 14:17:52.719848 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 14:17:52.719860 | orchestrator |
2025-07-12 14:17:52.719871 | orchestrator | TASK [Create report output directory] ******************************************
2025-07-12 14:17:52.719882 | orchestrator | Saturday 12 July 2025 14:17:39 +0000 (0:00:00.628) 0:00:01.068 *********
2025-07-12 14:17:52.719893 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 14:17:52.719904 | orchestrator |
2025-07-12 14:17:52.719915 | orchestrator | TASK [Define report vars] ******************************************************
2025-07-12 14:17:52.719926 | orchestrator | Saturday 12 July 2025 14:17:39 +0000 (0:00:00.862) 0:00:01.930 *********
2025-07-12 14:17:52.719937 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:52.719949 | orchestrator |
2025-07-12 14:17:52.719961 | orchestrator | TASK [Prepare test data for container existance test] **************************
2025-07-12 14:17:52.719971 | orchestrator | Saturday 12 July 2025 14:17:40 +0000 (0:00:00.259) 0:00:02.189 *********
2025-07-12 14:17:52.719982 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:52.719994 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:17:52.720005 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:17:52.720016 | orchestrator |
2025-07-12 14:17:52.720027 | orchestrator | TASK [Get container info] ******************************************************
2025-07-12 14:17:52.720038 | orchestrator | Saturday 12 July 2025 14:17:40 +0000 (0:00:00.298) 0:00:02.488 *********
2025-07-12 14:17:52.720049 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:17:52.720060 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:52.720071 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:17:52.720082 | orchestrator |
2025-07-12 14:17:52.720093 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2025-07-12 14:17:52.720104 | orchestrator | Saturday 12 July 2025 14:17:41 +0000 (0:00:00.952) 0:00:03.441 *********
2025-07-12 14:17:52.720115 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:17:52.720126 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:17:52.720138 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:17:52.720148 | orchestrator |
2025-07-12 14:17:52.720160 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2025-07-12 14:17:52.720173 | orchestrator | Saturday 12 July 2025 14:17:41 +0000 (0:00:00.266) 0:00:03.708 *********
2025-07-12 14:17:52.720185 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:52.720198 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:17:52.720210 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:17:52.720223 | orchestrator |
2025-07-12 14:17:52.720235 | orchestrator | TASK [Prepare test data] *******************************************************
2025-07-12 14:17:52.720272 | orchestrator | Saturday 12 July 2025 14:17:42 +0000 (0:00:00.451) 0:00:04.159 *********
2025-07-12 14:17:52.720285 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:52.720297 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:17:52.720309 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:17:52.720321 | orchestrator |
2025-07-12 14:17:52.720333 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2025-07-12 14:17:52.720346 | orchestrator | Saturday 12 July 2025 14:17:42 +0000 (0:00:00.323) 0:00:04.483 *********
2025-07-12 14:17:52.720358 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:17:52.720371 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:17:52.720383 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:17:52.720395 | orchestrator |
2025-07-12 14:17:52.720407 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2025-07-12 14:17:52.720419 | orchestrator | Saturday 12 July 2025 14:17:42 +0000 (0:00:00.310) 0:00:04.793 *********
2025-07-12 14:17:52.720432 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:52.720444 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:17:52.720456 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:17:52.720468 | orchestrator |
2025-07-12 14:17:52.720497 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-07-12 14:17:52.720510 | orchestrator | Saturday 12 July 2025 14:17:43 +0000 (0:00:00.290) 0:00:05.083 *********
2025-07-12 14:17:52.720545 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:17:52.720556 | orchestrator |
2025-07-12 14:17:52.720567 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-07-12 14:17:52.720578 | orchestrator | Saturday 12 July 2025 14:17:43 +0000 (0:00:00.231) 0:00:05.314 *********
2025-07-12 14:17:52.720589 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:17:52.720600 | orchestrator |
2025-07-12 14:17:52.720611 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-07-12 14:17:52.720622 | orchestrator | Saturday 12 July 2025 14:17:43 +0000 (0:00:00.688) 0:00:06.003 *********
2025-07-12 14:17:52.720633 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:17:52.720644 | orchestrator |
2025-07-12 14:17:52.720655 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 14:17:52.720666 | orchestrator | Saturday 12 July 2025 14:17:44 +0000 (0:00:00.238) 0:00:06.242 *********
2025-07-12 14:17:52.720676 | orchestrator |
2025-07-12 14:17:52.720687 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 14:17:52.720698 | orchestrator | Saturday 12 July 2025 14:17:44 +0000 (0:00:00.078) 0:00:06.321 *********
2025-07-12 14:17:52.720709 | orchestrator |
2025-07-12 14:17:52.720720 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 14:17:52.720730 | orchestrator | Saturday 12 July 2025 14:17:44 +0000 (0:00:00.067) 0:00:06.389 *********
2025-07-12 14:17:52.720741 | orchestrator |
2025-07-12 14:17:52.720752 | orchestrator | TASK [Print report file information] *******************************************
2025-07-12 14:17:52.720762 | orchestrator | Saturday 12 July 2025 14:17:44 +0000 (0:00:00.111) 0:00:06.500 *********
2025-07-12 14:17:52.720773 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:17:52.720784 | orchestrator |
2025-07-12 14:17:52.720795 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-07-12 14:17:52.720806 | orchestrator | Saturday 12 July 2025 14:17:44 +0000 (0:00:00.258) 0:00:06.759 *********
2025-07-12 14:17:52.720817 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:17:52.720827 | orchestrator |
2025-07-12 14:17:52.720857 | orchestrator | TASK [Define mgr module test vars] *********************************************
2025-07-12 14:17:52.720868 | orchestrator | Saturday 12 July 2025 14:17:44 +0000 (0:00:00.238) 0:00:06.997 *********
2025-07-12 14:17:52.720880 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:52.720890 | orchestrator |
2025-07-12 14:17:52.720901 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2025-07-12 14:17:52.720912 | orchestrator | Saturday 12 July 2025 14:17:45 +0000 (0:00:00.129) 0:00:07.126 *********
2025-07-12 14:17:52.720932 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:17:52.720943 | orchestrator |
2025-07-12 14:17:52.720954 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2025-07-12 14:17:52.720965 | orchestrator | Saturday 12 July 2025 14:17:47 +0000 (0:00:01.945) 0:00:09.072 *********
2025-07-12 14:17:52.720976 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:52.720987 | orchestrator |
2025-07-12 14:17:52.720997 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2025-07-12 14:17:52.721008 | orchestrator | Saturday 12 July 2025 14:17:47 +0000 (0:00:00.237) 0:00:09.310 *********
2025-07-12 14:17:52.721019 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:52.721030 | orchestrator |
2025-07-12 14:17:52.721041 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2025-07-12 14:17:52.721051 | orchestrator | Saturday 12 July 2025 14:17:47 +0000 (0:00:00.276) 0:00:09.586 *********
2025-07-12 14:17:52.721062 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:17:52.721073 | orchestrator |
2025-07-12 14:17:52.721083 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2025-07-12 14:17:52.721094 | orchestrator | Saturday 12 July 2025 14:17:47 +0000 (0:00:00.329) 0:00:09.916 *********
2025-07-12 14:17:52.721105 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:52.721116 | orchestrator |
2025-07-12 14:17:52.721127 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-07-12 14:17:52.721137 | orchestrator | Saturday 12 July 2025 14:17:47 +0000 (0:00:00.142) 0:00:10.059 *********
2025-07-12 14:17:52.721148 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 14:17:52.721159 | orchestrator |
2025-07-12 14:17:52.721175 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-07-12 14:17:52.721187 | orchestrator | Saturday 12 July 2025 14:17:48 +0000 (0:00:00.254) 0:00:10.313 *********
2025-07-12 14:17:52.721197 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:17:52.721208 | orchestrator |
2025-07-12 14:17:52.721219 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-07-12 14:17:52.721230 | orchestrator | Saturday 12 July 2025 14:17:48 +0000 (0:00:00.255) 0:00:10.569 *********
2025-07-12 14:17:52.721240 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 14:17:52.721251 | orchestrator |
2025-07-12 14:17:52.721262 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-07-12 14:17:52.721273 | orchestrator | Saturday 12 July 2025 14:17:49 +0000 (0:00:01.182) 0:00:11.751 *********
2025-07-12 14:17:52.721284 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 14:17:52.721295 | orchestrator |
2025-07-12 14:17:52.721305 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-07-12 14:17:52.721316 | orchestrator | Saturday 12 July 2025 14:17:49 +0000 (0:00:00.243) 0:00:11.995 *********
2025-07-12 14:17:52.721327 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 14:17:52.721337 | orchestrator |
2025-07-12 14:17:52.721348 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 14:17:52.721359 | orchestrator | Saturday 12 July 2025 14:17:50 +0000 (0:00:00.248) 0:00:12.244 *********
2025-07-12 14:17:52.721370 | orchestrator |
2025-07-12 14:17:52.721381 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 14:17:52.721391 | orchestrator | Saturday 12 July 2025 14:17:50 +0000 (0:00:00.067) 0:00:12.312 *********
2025-07-12 14:17:52.721402 | orchestrator |
2025-07-12 14:17:52.721413 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 14:17:52.721424 | orchestrator | Saturday 12 July 2025 14:17:50 +0000 (0:00:00.072) 0:00:12.384 *********
2025-07-12 14:17:52.721434 | orchestrator |
2025-07-12 14:17:52.721445 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-07-12 14:17:52.721456 | orchestrator | Saturday 12 July 2025 14:17:50 +0000 (0:00:00.071) 0:00:12.456 *********
2025-07-12 14:17:52.721473 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 14:17:52.721484 | orchestrator |
2025-07-12 14:17:52.721495 | orchestrator | TASK [Print report file information] *******************************************
2025-07-12 14:17:52.721506 | orchestrator | Saturday 12 July 2025 14:17:51 +0000 (0:00:01.499) 0:00:13.955 *********
2025-07-12 14:17:52.721535 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-07-12 14:17:52.721547 | orchestrator |  "msg": [
2025-07-12 14:17:52.721559 | orchestrator |  "Validator run completed.",
2025-07-12 14:17:52.721570 | orchestrator |  "You can find the report file here:",
2025-07-12 14:17:52.721582 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-07-12T14:17:38+00:00-report.json",
2025-07-12 14:17:52.721593 | orchestrator |  "on the following host:",
2025-07-12 14:17:52.721604 | orchestrator |  "testbed-manager"
2025-07-12 14:17:52.721615 | orchestrator |  ]
2025-07-12 14:17:52.721626 | orchestrator | }
2025-07-12 14:17:52.721637 | orchestrator |
2025-07-12 14:17:52.721648 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 14:17:52.721660 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-07-12 14:17:52.721672 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 14:17:52.721691 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 14:17:53.001654 | orchestrator |
2025-07-12 14:17:53.001743 | orchestrator |
2025-07-12 14:17:53.001757 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 14:17:53.001769 | orchestrator | Saturday 12 July 2025 14:17:52 +0000 (0:00:00.803) 0:00:14.758 *********
2025-07-12 14:17:53.001780 | orchestrator | ===============================================================================
2025-07-12 14:17:53.001791 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.95s
2025-07-12 14:17:53.001802 | orchestrator | Write report file ------------------------------------------------------- 1.50s
2025-07-12 14:17:53.001813 | orchestrator | Aggregate test results step one ----------------------------------------- 1.18s
2025-07-12 14:17:53.001824 | orchestrator | Get container info ------------------------------------------------------ 0.95s
2025-07-12 14:17:53.001835 | orchestrator | Create report output directory ------------------------------------------ 0.86s
2025-07-12 14:17:53.001846 | orchestrator | Print
report file information ------------------------------------------- 0.80s 2025-07-12 14:17:53.001876 | orchestrator | Aggregate test results step two ----------------------------------------- 0.69s 2025-07-12 14:17:53.001888 | orchestrator | Get timestamp for report file ------------------------------------------- 0.63s 2025-07-12 14:17:53.001899 | orchestrator | Set test result to passed if container is existing ---------------------- 0.45s 2025-07-12 14:17:53.001910 | orchestrator | Fail test if mgr modules are disabled that should be enabled ------------ 0.33s 2025-07-12 14:17:53.001921 | orchestrator | Prepare test data ------------------------------------------------------- 0.32s 2025-07-12 14:17:53.001932 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.31s 2025-07-12 14:17:53.001943 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s 2025-07-12 14:17:53.001954 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.29s 2025-07-12 14:17:53.001981 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.28s 2025-07-12 14:17:53.001993 | orchestrator | Set test result to failed if container is missing ----------------------- 0.27s 2025-07-12 14:17:53.002004 | orchestrator | Define report vars ------------------------------------------------------ 0.26s 2025-07-12 14:17:53.002063 | orchestrator | Print report file information ------------------------------------------- 0.26s 2025-07-12 14:17:53.002078 | orchestrator | Flush handlers ---------------------------------------------------------- 0.26s 2025-07-12 14:17:53.002111 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.26s 2025-07-12 14:17:53.308475 | orchestrator | + osism validate ceph-osds 2025-07-12 14:18:14.164763 | orchestrator | 2025-07-12 14:18:14.164881 | orchestrator | PLAY [Ceph validate OSDs] 
****************************************************** 2025-07-12 14:18:14.164898 | orchestrator | 2025-07-12 14:18:14.164910 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-07-12 14:18:14.164922 | orchestrator | Saturday 12 July 2025 14:18:09 +0000 (0:00:00.480) 0:00:00.480 ********* 2025-07-12 14:18:14.164934 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-12 14:18:14.164946 | orchestrator | 2025-07-12 14:18:14.164957 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-07-12 14:18:14.164969 | orchestrator | Saturday 12 July 2025 14:18:10 +0000 (0:00:00.679) 0:00:01.160 ********* 2025-07-12 14:18:14.164980 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-12 14:18:14.164991 | orchestrator | 2025-07-12 14:18:14.165002 | orchestrator | TASK [Create report output directory] ****************************************** 2025-07-12 14:18:14.165013 | orchestrator | Saturday 12 July 2025 14:18:10 +0000 (0:00:00.242) 0:00:01.402 ********* 2025-07-12 14:18:14.165024 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-12 14:18:14.165035 | orchestrator | 2025-07-12 14:18:14.165046 | orchestrator | TASK [Define report vars] ****************************************************** 2025-07-12 14:18:14.165057 | orchestrator | Saturday 12 July 2025 14:18:11 +0000 (0:00:01.008) 0:00:02.410 ********* 2025-07-12 14:18:14.165068 | orchestrator | ok: [testbed-node-3] 2025-07-12 14:18:14.165080 | orchestrator | 2025-07-12 14:18:14.165091 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-07-12 14:18:14.165102 | orchestrator | Saturday 12 July 2025 14:18:11 +0000 (0:00:00.115) 0:00:02.526 ********* 2025-07-12 14:18:14.165113 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:18:14.165124 | orchestrator | 2025-07-12 14:18:14.165135 | 
orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-07-12 14:18:14.165146 | orchestrator | Saturday 12 July 2025 14:18:12 +0000 (0:00:00.145) 0:00:02.672 ********* 2025-07-12 14:18:14.165157 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:18:14.165169 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:18:14.165179 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:18:14.165191 | orchestrator | 2025-07-12 14:18:14.165201 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-07-12 14:18:14.165212 | orchestrator | Saturday 12 July 2025 14:18:12 +0000 (0:00:00.303) 0:00:02.975 ********* 2025-07-12 14:18:14.165223 | orchestrator | ok: [testbed-node-3] 2025-07-12 14:18:14.165235 | orchestrator | 2025-07-12 14:18:14.165245 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-07-12 14:18:14.165256 | orchestrator | Saturday 12 July 2025 14:18:12 +0000 (0:00:00.150) 0:00:03.125 ********* 2025-07-12 14:18:14.165267 | orchestrator | ok: [testbed-node-3] 2025-07-12 14:18:14.165279 | orchestrator | ok: [testbed-node-4] 2025-07-12 14:18:14.165291 | orchestrator | ok: [testbed-node-5] 2025-07-12 14:18:14.165304 | orchestrator | 2025-07-12 14:18:14.165316 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2025-07-12 14:18:14.165329 | orchestrator | Saturday 12 July 2025 14:18:12 +0000 (0:00:00.341) 0:00:03.467 ********* 2025-07-12 14:18:14.165341 | orchestrator | ok: [testbed-node-3] 2025-07-12 14:18:14.165353 | orchestrator | 2025-07-12 14:18:14.165365 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-07-12 14:18:14.165377 | orchestrator | Saturday 12 July 2025 14:18:13 +0000 (0:00:00.522) 0:00:03.989 ********* 2025-07-12 14:18:14.165390 | orchestrator | ok: [testbed-node-3] 2025-07-12 14:18:14.165401 | 
orchestrator | ok: [testbed-node-4] 2025-07-12 14:18:14.165412 | orchestrator | ok: [testbed-node-5] 2025-07-12 14:18:14.165423 | orchestrator | 2025-07-12 14:18:14.165434 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2025-07-12 14:18:14.165470 | orchestrator | Saturday 12 July 2025 14:18:13 +0000 (0:00:00.509) 0:00:04.499 ********* 2025-07-12 14:18:14.165484 | orchestrator | skipping: [testbed-node-3] => (item={'id': '47e3dbf8978c1021b8522f1df6a21f511793b6d212221ced442b38336ab1cd41', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-07-12 14:18:14.165498 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c93c63f74c35e046e0e579ed0cd48fdf9b0c59ff9b1addf90e03bb7bedecd28c', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-07-12 14:18:14.165510 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd2f161b7a353c6e0ae7a53e2d07923f2a1f05d1a1e149421845d63343ca0ffb4', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-07-12 14:18:14.165522 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2538cb188ea4de90f05f47e6516476bc2aba8bbd8221f727d2fc6024092b4fb4', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-07-12 14:18:14.165534 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5588bc05bed6d16310ec0835afabaec3e6b195c02da94c629ae664d3d25d16e8', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-07-12 14:18:14.165610 | orchestrator | 
skipping: [testbed-node-3] => (item={'id': 'b072d38b2657afc7db6c94593241a033598bd153d96cc34f413965a0aebc336f', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2025-07-12 14:18:14.165637 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8baee7d5fb6e1cc4a86e416d139e99f1250613dd442da8a4a34f2f1aae6dc073', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})  2025-07-12 14:18:14.165649 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'bead9f3257a578c4d68c572db313f1bfdec10e7dde0931b72d66262ebff4da08', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-07-12 14:18:14.165661 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e452cec8d1e6f98dead39c1b050e77cd61001eb8e76bf2157bcd8489d82587e3', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-07-12 14:18:14.165673 | orchestrator | skipping: [testbed-node-3] => (item={'id': '648147c712aef687e87e14b00caf2f6a1a7357c421404d557d35b7c1e5711108', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-07-12 14:18:14.165685 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'dcd422de49dc8e6fa6a95f00d5af97c8809a322f678b12b384c2473b0b3144ce', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})  2025-07-12 14:18:14.165697 | orchestrator | skipping: [testbed-node-3] => (item={'id': 
'63a889de6991aed18a6bf8b7068bd99a668e8cee58e41667abe6ff230856bc37', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 24 minutes'})  2025-07-12 14:18:14.165710 | orchestrator | ok: [testbed-node-3] => (item={'id': '73f25a361a7a84294f53b6e3b9ee7e46d2202b13fe2044248fb76f0847259861', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-07-12 14:18:14.165732 | orchestrator | ok: [testbed-node-3] => (item={'id': 'cb8bae6b2a3c1ae3523eb44c89642651b0ea199eca020da5b4b68837b2f83c8d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-07-12 14:18:14.165744 | orchestrator | skipping: [testbed-node-3] => (item={'id': '596dbaf91d132f3e9353d419a349b0b29befc022977f65556debfa9394f6b0c4', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2025-07-12 14:18:14.165755 | orchestrator | skipping: [testbed-node-3] => (item={'id': '98d43e5d003e23675a6793d9d9bda2563bd3dfb8b068e567e0f9ecb6d6877e58', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-07-12 14:18:14.165767 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b161f620bcbcb6af9bf6a8cded518a3b6921b97d0152461bb9f01b74b7d7e337', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-07-12 14:18:14.165778 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7b6cad3573d525621c4610fe36744784718d7919085bfb60f3a4050dc6a84a5f', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'name': '/cron', 'state': 'running', 'status': 'Up 31 
minutes'})  2025-07-12 14:18:14.165795 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'bbd1039240c4a694db46bd636160820a42cf0ed6dfb560835e321f3938d98b36', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-07-12 14:18:14.165807 | orchestrator | skipping: [testbed-node-3] => (item={'id': '96ffd002c92f44bb418ca681d5490db0bded2522b1093cfa9d91409c3fd5fd50', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2025-07-12 14:18:14.165825 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3b1b3a525e3d99f6cfe976b47633c8e9d822368970a38a21092994a915124ea9', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-07-12 14:18:14.311305 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0281fff7d7884ef31334ec2034a940d601838de99b84bab913b15c7da7a7aaea', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-07-12 14:18:14.311465 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b85bd68d2edf199c6f35c9ffd4ff98411229c5c5c8f096dc4ecd4a6a7475482d', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-07-12 14:18:14.311482 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd01d465f079abc5404e98c28229602484ce20a7632bc3a7657bcc59aae20e4e5', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-07-12 14:18:14.311494 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'c18b0bd46fb9aad46ef492b2df590252ccac6cb8dcf836a13587968ede00c2a8', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-07-12 14:18:14.311505 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'cbb6ad81e57372ced4b45f6fa2d311c3a7e23526fcf2e0fa9f577d2a3ccc8228', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2025-07-12 14:18:14.311519 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5c8a6e019a91fe26f7cb0f3676495489dfe9ed9136f73140336dccc4c1303008', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})  2025-07-12 14:18:14.311634 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a31903e32a4d65a44a2b3db6b32ddf8ee255c61213a8e533d7fae0c2926f494a', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-07-12 14:18:14.311651 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0d68be23c390e7d9265febf2b265221c6ed4642cdf4bb9bfe216c58e4f795b60', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-07-12 14:18:14.311662 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bc90ab2b4428a40632f796ab3172223ea7ae5d82d1c48223c61c690d0b0a2321', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-07-12 14:18:14.311675 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'e12ce9372980a77af2d9bbcd4f9d626f4da533c2b59f02a3df75b5ba026c44b9', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})  2025-07-12 14:18:14.311686 | orchestrator | skipping: [testbed-node-4] => (item={'id': '837741e3b265b9f9bcda976c2f72cee8df6ae1bf6d16b18e18ab8cffb46d466e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 24 minutes'})  2025-07-12 14:18:14.311699 | orchestrator | ok: [testbed-node-4] => (item={'id': '8221a7f8ac8b500fbd274775f8c176175e7cd2072635cb5cb93ac2dbfd7b6c4c', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-07-12 14:18:14.311726 | orchestrator | ok: [testbed-node-4] => (item={'id': 'e2632438e9291e9d8ae37a58ac3fda8edeca540ccb917006476ff64def94c7fb', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-07-12 14:18:14.311739 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1aac93b4b0379f26dac444c1a1166bbd956e21add9969983c45f0022fb881af7', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2025-07-12 14:18:14.311768 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8b9c2c7bb69f39009af7f5939970d5292c4bc055118275b9f29af4d215a4e6a3', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-07-12 14:18:14.311781 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'cb6a2f6758acb9538f99644a8cbe44c5eaa94ba17a4deda702a797d082478583', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 
30 minutes (healthy)'})  2025-07-12 14:18:14.311793 | orchestrator | skipping: [testbed-node-4] => (item={'id': '94f8d4387b10359a8cf25c52c199caa3d3e3a73754a9399038e22bbc87fd3d6a', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2025-07-12 14:18:14.311804 | orchestrator | skipping: [testbed-node-4] => (item={'id': '021fc1e79d0cf7a4bcea2b158dfcd28da33510a7ecccede86cb3314624dd0328', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-07-12 14:18:14.311816 | orchestrator | skipping: [testbed-node-4] => (item={'id': '786d6b5697c4daeed2a490083836188360357ad011b6a8718b1f5a1a1bc58c70', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2025-07-12 14:18:14.311839 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3a1886440b3cbd98356c28181b355d0b65693ccf2bb538f123a3ff3c4c16109c', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-07-12 14:18:14.311853 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9bbd5f08ff529a2ab66c8542344779308be0ab713a11d86c4e960b33b86c8855', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-07-12 14:18:14.311866 | orchestrator | skipping: [testbed-node-5] => (item={'id': '967683bfbbb910dcb22e6e8a737876be6040b8a2df14d4ab6b07dfef66583aac', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-07-12 14:18:14.311879 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c53d018e0996de214d6672924bfea201ad2e44cca71f843ab9d30321805f4315', 
'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-07-12 14:18:14.311892 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5b9c9af1d8a12ec49923515ced6d73d198f4210645291410bc7c51c52bfa9d7c', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-07-12 14:18:14.311905 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3c230e143d85b2109d054ae8e276e5bf6deabded2c835a6fdfe81a032a6c5037', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2025-07-12 14:18:14.311917 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'acca1121b514da327bbb275171f5a29e3c8941aab5a6cef3d51e6db00f5afbdb', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})  2025-07-12 14:18:14.311930 | orchestrator | skipping: [testbed-node-5] => (item={'id': '53c19ab986b46dd12d43bfeb5aaccc33fbf46413058a16a1bd39d21b3f8bf72f', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-07-12 14:18:14.311943 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7c2d31012039403370c0465edc05e8e5ba39a4c8055edd4589926af4e555b6a9', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-07-12 14:18:14.311955 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cd78e2f30a61b55bd6129b479902b5c52458cc0b8829f252e5ff391f530b08ec', 'image': 
'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-07-12 14:18:14.311975 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd1f4b8b295b7e6051507b6f3c7664563d8ce62e98182d86bbf8eb69ae437664e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})  2025-07-12 14:18:21.966957 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5f3ceec538f5edd3ea59f3bcc9d3a7cb6a1284b9063c732dae469d2cd155becc', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 24 minutes'})  2025-07-12 14:18:21.967074 | orchestrator | ok: [testbed-node-5] => (item={'id': '96951cb5ff435f172488a3fb0919f2a9d3c5186ec53cdc7b7ebdb24e007d0ed2', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-07-12 14:18:21.967110 | orchestrator | ok: [testbed-node-5] => (item={'id': '287465026ec1c296489e41c585e20b8406603334b8022f037344c710e329dbbd', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-07-12 14:18:21.967147 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fd2584465b760f0fc1ba1f5e1bf9dd8e38d975e174b72cd6a8e18254033678ec', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2025-07-12 14:18:21.967162 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c32b038ac59ab1c921a0f2c513c9b21b423cb542b2aff0e57c035a37231a83c0', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-07-12 14:18:21.967175 | orchestrator | skipping: 
[testbed-node-5] => (item={'id': 'ca98af745d74c5419759cd15568d3fc39e3f19b1a007b7a51392fb78c8c977d9', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-07-12 14:18:21.967187 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1055db40eea65d77939aa99be56a957c9f3093a66d3b409e767ed65ce4c38a15', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2025-07-12 14:18:21.967198 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c9cd35dcd50bc466639d5cc3682f01e39e834f14cfb03200dd79aa649b04d619', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-07-12 14:18:21.967209 | orchestrator | skipping: [testbed-node-5] => (item={'id': '97c6747571a4bc57fa9bc74bab2bb5cedaeadf88bacb6b94a04767bcffe4daf6', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2025-07-12 14:18:21.967221 | orchestrator | 2025-07-12 14:18:21.967234 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-07-12 14:18:21.967246 | orchestrator | Saturday 12 July 2025 14:18:14 +0000 (0:00:00.515) 0:00:05.014 ********* 2025-07-12 14:18:21.967257 | orchestrator | ok: [testbed-node-3] 2025-07-12 14:18:21.967269 | orchestrator | ok: [testbed-node-4] 2025-07-12 14:18:21.967279 | orchestrator | ok: [testbed-node-5] 2025-07-12 14:18:21.967290 | orchestrator | 2025-07-12 14:18:21.967301 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-07-12 14:18:21.967312 | orchestrator | Saturday 12 July 2025 14:18:14 +0000 (0:00:00.320) 0:00:05.335 ********* 2025-07-12 14:18:21.967323 | orchestrator | skipping: [testbed-node-3] 
2025-07-12 14:18:21.967335 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:18:21.967346 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:18:21.967356 | orchestrator | 2025-07-12 14:18:21.967368 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-07-12 14:18:21.967380 | orchestrator | Saturday 12 July 2025 14:18:15 +0000 (0:00:00.311) 0:00:05.647 ********* 2025-07-12 14:18:21.967390 | orchestrator | ok: [testbed-node-3] 2025-07-12 14:18:21.967401 | orchestrator | ok: [testbed-node-4] 2025-07-12 14:18:21.967412 | orchestrator | ok: [testbed-node-5] 2025-07-12 14:18:21.967423 | orchestrator | 2025-07-12 14:18:21.967434 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-07-12 14:18:21.967450 | orchestrator | Saturday 12 July 2025 14:18:15 +0000 (0:00:00.500) 0:00:06.147 ********* 2025-07-12 14:18:21.967461 | orchestrator | ok: [testbed-node-3] 2025-07-12 14:18:21.967472 | orchestrator | ok: [testbed-node-4] 2025-07-12 14:18:21.967483 | orchestrator | ok: [testbed-node-5] 2025-07-12 14:18:21.967496 | orchestrator | 2025-07-12 14:18:21.967508 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-07-12 14:18:21.967520 | orchestrator | Saturday 12 July 2025 14:18:15 +0000 (0:00:00.301) 0:00:06.448 ********* 2025-07-12 14:18:21.967533 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-07-12 14:18:21.967555 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-07-12 14:18:21.967602 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:18:21.967623 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-07-12 14:18:21.967642 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 
'state': 'running'})  2025-07-12 14:18:21.967677 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:18:21.967691 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-07-12 14:18:21.967703 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-07-12 14:18:21.967715 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:18:21.967727 | orchestrator | 2025-07-12 14:18:21.967739 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-07-12 14:18:21.967752 | orchestrator | Saturday 12 July 2025 14:18:16 +0000 (0:00:00.315) 0:00:06.764 ********* 2025-07-12 14:18:21.967763 | orchestrator | ok: [testbed-node-3] 2025-07-12 14:18:21.967775 | orchestrator | ok: [testbed-node-4] 2025-07-12 14:18:21.967788 | orchestrator | ok: [testbed-node-5] 2025-07-12 14:18:21.967800 | orchestrator | 2025-07-12 14:18:21.967812 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-07-12 14:18:21.967824 | orchestrator | Saturday 12 July 2025 14:18:16 +0000 (0:00:00.295) 0:00:07.059 ********* 2025-07-12 14:18:21.967836 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:18:21.967848 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:18:21.967859 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:18:21.967870 | orchestrator | 2025-07-12 14:18:21.967880 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-07-12 14:18:21.967891 | orchestrator | Saturday 12 July 2025 14:18:16 +0000 (0:00:00.477) 0:00:07.537 ********* 2025-07-12 14:18:21.967902 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:18:21.967913 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:18:21.967923 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:18:21.967934 | orchestrator | 2025-07-12 14:18:21.967945 | 
orchestrator | TASK [Set test result to passed if all containers are running] *****************
2025-07-12 14:18:21.967956 | orchestrator | Saturday 12 July 2025 14:18:17 +0000 (0:00:00.315) 0:00:07.852 *********
2025-07-12 14:18:21.967966 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:18:21.967977 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:18:21.967988 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:18:21.967999 | orchestrator |
2025-07-12 14:18:21.968009 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-07-12 14:18:21.968020 | orchestrator | Saturday 12 July 2025 14:18:17 +0000 (0:00:00.309) 0:00:08.162 *********
2025-07-12 14:18:21.968031 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:18:21.968042 | orchestrator |
2025-07-12 14:18:21.968053 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-07-12 14:18:21.968064 | orchestrator | Saturday 12 July 2025 14:18:17 +0000 (0:00:00.261) 0:00:08.423 *********
2025-07-12 14:18:21.968074 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:18:21.968085 | orchestrator |
2025-07-12 14:18:21.968096 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-07-12 14:18:21.968107 | orchestrator | Saturday 12 July 2025 14:18:18 +0000 (0:00:00.283) 0:00:08.707 *********
2025-07-12 14:18:21.968117 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:18:21.968128 | orchestrator |
2025-07-12 14:18:21.968139 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 14:18:21.968149 | orchestrator | Saturday 12 July 2025 14:18:18 +0000 (0:00:00.250) 0:00:08.958 *********
2025-07-12 14:18:21.968160 | orchestrator |
2025-07-12 14:18:21.968171 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 14:18:21.968189 | orchestrator | Saturday 12 July 2025 14:18:18 +0000 (0:00:00.074) 0:00:09.032 *********
2025-07-12 14:18:21.968201 | orchestrator |
2025-07-12 14:18:21.968212 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 14:18:21.968222 | orchestrator | Saturday 12 July 2025 14:18:18 +0000 (0:00:00.079) 0:00:09.111 *********
2025-07-12 14:18:21.968233 | orchestrator |
2025-07-12 14:18:21.968244 | orchestrator | TASK [Print report file information] *******************************************
2025-07-12 14:18:21.968255 | orchestrator | Saturday 12 July 2025 14:18:18 +0000 (0:00:00.246) 0:00:09.357 *********
2025-07-12 14:18:21.968266 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:18:21.968277 | orchestrator |
2025-07-12 14:18:21.968288 | orchestrator | TASK [Fail early due to containers not running] ********************************
2025-07-12 14:18:21.968299 | orchestrator | Saturday 12 July 2025 14:18:18 +0000 (0:00:00.259) 0:00:09.617 *********
2025-07-12 14:18:21.968310 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:18:21.968321 | orchestrator |
2025-07-12 14:18:21.968332 | orchestrator | TASK [Prepare test data] *******************************************************
2025-07-12 14:18:21.968343 | orchestrator | Saturday 12 July 2025 14:18:19 +0000 (0:00:00.261) 0:00:09.878 *********
2025-07-12 14:18:21.968354 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:18:21.968365 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:18:21.968376 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:18:21.968387 | orchestrator |
2025-07-12 14:18:21.968398 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2025-07-12 14:18:21.968409 | orchestrator | Saturday 12 July 2025 14:18:19 +0000 (0:00:00.290) 0:00:10.169 *********
2025-07-12 14:18:21.968420 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:18:21.968431 | orchestrator |
2025-07-12 14:18:21.968448 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2025-07-12 14:18:21.968460 | orchestrator | Saturday 12 July 2025 14:18:19 +0000 (0:00:00.231) 0:00:10.401 *********
2025-07-12 14:18:21.968471 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-07-12 14:18:21.968482 | orchestrator |
2025-07-12 14:18:21.968493 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2025-07-12 14:18:21.968504 | orchestrator | Saturday 12 July 2025 14:18:21 +0000 (0:00:01.608) 0:00:12.009 *********
2025-07-12 14:18:21.968515 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:18:21.968526 | orchestrator |
2025-07-12 14:18:21.968537 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2025-07-12 14:18:21.968548 | orchestrator | Saturday 12 July 2025 14:18:21 +0000 (0:00:00.303) 0:00:12.151 *********
2025-07-12 14:18:21.968582 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:18:21.968594 | orchestrator |
2025-07-12 14:18:21.968606 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2025-07-12 14:18:21.968617 | orchestrator | Saturday 12 July 2025 14:18:21 +0000 (0:00:00.130) 0:00:12.454 *********
2025-07-12 14:18:21.968635 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:18:34.905220 | orchestrator |
2025-07-12 14:18:34.905350 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2025-07-12 14:18:34.905368 | orchestrator | Saturday 12 July 2025 14:18:21 +0000 (0:00:00.130) 0:00:12.584 *********
2025-07-12 14:18:34.905380 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:18:34.905392 | orchestrator |
2025-07-12 14:18:34.905436 | orchestrator | TASK [Prepare test data] *******************************************************
2025-07-12 14:18:34.905450 | orchestrator | Saturday 12 July 2025 14:18:22 +0000 (0:00:00.134) 0:00:12.719 *********
2025-07-12 14:18:34.905477 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:18:34.905500 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:18:34.905511 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:18:34.905522 | orchestrator |
2025-07-12 14:18:34.905534 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2025-07-12 14:18:34.905545 | orchestrator | Saturday 12 July 2025 14:18:22 +0000 (0:00:00.503) 0:00:13.222 *********
2025-07-12 14:18:34.905557 | orchestrator | changed: [testbed-node-3]
2025-07-12 14:18:34.905628 | orchestrator | changed: [testbed-node-4]
2025-07-12 14:18:34.905643 | orchestrator | changed: [testbed-node-5]
2025-07-12 14:18:34.905654 | orchestrator |
2025-07-12 14:18:34.905665 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2025-07-12 14:18:34.905676 | orchestrator | Saturday 12 July 2025 14:18:25 +0000 (0:00:02.433) 0:00:15.656 *********
2025-07-12 14:18:34.905687 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:18:34.905698 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:18:34.905709 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:18:34.905720 | orchestrator |
2025-07-12 14:18:34.905731 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2025-07-12 14:18:34.905743 | orchestrator | Saturday 12 July 2025 14:18:25 +0000 (0:00:00.294) 0:00:15.951 *********
2025-07-12 14:18:34.905755 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:18:34.905767 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:18:34.905779 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:18:34.905791 | orchestrator |
2025-07-12 14:18:34.905802 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2025-07-12 14:18:34.905815 | orchestrator | Saturday 12 July 2025 14:18:25 +0000 (0:00:00.513) 0:00:16.464 *********
2025-07-12 14:18:34.905827 | orchestrator | skipping: [testbed-node-3] 2025-07-12
14:18:34.905840 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:18:34.905851 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:18:34.905864 | orchestrator |
2025-07-12 14:18:34.905876 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2025-07-12 14:18:34.905888 | orchestrator | Saturday 12 July 2025 14:18:26 +0000 (0:00:00.508) 0:00:16.973 *********
2025-07-12 14:18:34.905900 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:18:34.905913 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:18:34.905925 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:18:34.905937 | orchestrator |
2025-07-12 14:18:34.905950 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2025-07-12 14:18:34.905962 | orchestrator | Saturday 12 July 2025 14:18:26 +0000 (0:00:00.324) 0:00:17.298 *********
2025-07-12 14:18:34.905974 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:18:34.905986 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:18:34.905999 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:18:34.906011 | orchestrator |
2025-07-12 14:18:34.906074 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2025-07-12 14:18:34.906086 | orchestrator | Saturday 12 July 2025 14:18:26 +0000 (0:00:00.286) 0:00:17.584 *********
2025-07-12 14:18:34.906099 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:18:34.906111 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:18:34.906122 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:18:34.906133 | orchestrator |
2025-07-12 14:18:34.906143 | orchestrator | TASK [Prepare test data] *******************************************************
2025-07-12 14:18:34.906154 | orchestrator | Saturday 12 July 2025 14:18:27 +0000 (0:00:00.263) 0:00:17.847 *********
2025-07-12 14:18:34.906165 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:18:34.906176 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:18:34.906187 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:18:34.906198 | orchestrator |
2025-07-12 14:18:34.906208 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2025-07-12 14:18:34.906219 | orchestrator | Saturday 12 July 2025 14:18:27 +0000 (0:00:00.707) 0:00:18.555 *********
2025-07-12 14:18:34.906230 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:18:34.906241 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:18:34.906251 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:18:34.906262 | orchestrator |
2025-07-12 14:18:34.906273 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2025-07-12 14:18:34.906284 | orchestrator | Saturday 12 July 2025 14:18:28 +0000 (0:00:00.508) 0:00:19.063 *********
2025-07-12 14:18:34.906295 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:18:34.906306 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:18:34.906317 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:18:34.906328 | orchestrator |
2025-07-12 14:18:34.906347 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2025-07-12 14:18:34.906359 | orchestrator | Saturday 12 July 2025 14:18:28 +0000 (0:00:00.340) 0:00:19.403 *********
2025-07-12 14:18:34.906370 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:18:34.906381 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:18:34.906392 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:18:34.906403 | orchestrator |
2025-07-12 14:18:34.906414 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2025-07-12 14:18:34.906425 | orchestrator | Saturday 12 July 2025 14:18:29 +0000 (0:00:00.285) 0:00:19.689 *********
2025-07-12 14:18:34.906436 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:18:34.906447 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:18:34.906458 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:18:34.906469 | orchestrator |
2025-07-12 14:18:34.906480 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-07-12 14:18:34.906491 | orchestrator | Saturday 12 July 2025 14:18:29 +0000 (0:00:00.546) 0:00:20.235 *********
2025-07-12 14:18:34.906502 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-12 14:18:34.906513 | orchestrator |
2025-07-12 14:18:34.906524 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-07-12 14:18:34.906535 | orchestrator | Saturday 12 July 2025 14:18:29 +0000 (0:00:00.248) 0:00:20.484 *********
2025-07-12 14:18:34.906546 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:18:34.906557 | orchestrator |
2025-07-12 14:18:34.906637 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-07-12 14:18:34.906651 | orchestrator | Saturday 12 July 2025 14:18:30 +0000 (0:00:00.235) 0:00:20.720 *********
2025-07-12 14:18:34.906662 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-12 14:18:34.906673 | orchestrator |
2025-07-12 14:18:34.906684 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-07-12 14:18:34.906695 | orchestrator | Saturday 12 July 2025 14:18:31 +0000 (0:00:01.569) 0:00:22.289 *********
2025-07-12 14:18:34.906705 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-12 14:18:34.906716 | orchestrator |
2025-07-12 14:18:34.906727 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-07-12 14:18:34.906738 | orchestrator | Saturday 12 July 2025 14:18:31 +0000 (0:00:00.281) 0:00:22.571 *********
2025-07-12 14:18:34.906748 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-12 14:18:34.906759 | orchestrator |
2025-07-12 14:18:34.906770 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 14:18:34.906786 | orchestrator | Saturday 12 July 2025 14:18:32 +0000 (0:00:00.258) 0:00:22.829 *********
2025-07-12 14:18:34.906805 | orchestrator |
2025-07-12 14:18:34.906823 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 14:18:34.906841 | orchestrator | Saturday 12 July 2025 14:18:32 +0000 (0:00:00.069) 0:00:22.899 *********
2025-07-12 14:18:34.906860 | orchestrator |
2025-07-12 14:18:34.906879 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 14:18:34.906896 | orchestrator | Saturday 12 July 2025 14:18:32 +0000 (0:00:00.068) 0:00:22.968 *********
2025-07-12 14:18:34.906907 | orchestrator |
2025-07-12 14:18:34.906918 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-07-12 14:18:34.906929 | orchestrator | Saturday 12 July 2025 14:18:32 +0000 (0:00:00.071) 0:00:23.039 *********
2025-07-12 14:18:34.906940 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-12 14:18:34.906950 | orchestrator |
2025-07-12 14:18:34.906961 | orchestrator | TASK [Print report file information] *******************************************
2025-07-12 14:18:34.906972 | orchestrator | Saturday 12 July 2025 14:18:34 +0000 (0:00:01.591) 0:00:24.631 *********
2025-07-12 14:18:34.906983 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2025-07-12 14:18:34.906994 | orchestrator |  "msg": [
2025-07-12 14:18:34.907006 | orchestrator |  "Validator run completed.",
2025-07-12 14:18:34.907029 | orchestrator |  "You can find the report file here:",
2025-07-12 14:18:34.907040 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-07-12T14:18:10+00:00-report.json",
2025-07-12 14:18:34.907052 | orchestrator |  "on the following
host:",
2025-07-12 14:18:34.907063 | orchestrator |  "testbed-manager"
2025-07-12 14:18:34.907074 | orchestrator |  ]
2025-07-12 14:18:34.907085 | orchestrator | }
2025-07-12 14:18:34.907097 | orchestrator |
2025-07-12 14:18:34.907107 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 14:18:34.907119 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2025-07-12 14:18:34.907131 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-07-12 14:18:34.907142 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-07-12 14:18:34.907153 | orchestrator |
2025-07-12 14:18:34.907164 | orchestrator |
2025-07-12 14:18:34.907175 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 14:18:34.907186 | orchestrator | Saturday 12 July 2025 14:18:34 +0000 (0:00:00.869) 0:00:25.501 *********
2025-07-12 14:18:34.907196 | orchestrator | ===============================================================================
2025-07-12 14:18:34.907207 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.43s
2025-07-12 14:18:34.907218 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.61s
2025-07-12 14:18:34.907229 | orchestrator | Write report file ------------------------------------------------------- 1.59s
2025-07-12 14:18:34.907239 | orchestrator | Aggregate test results step one ----------------------------------------- 1.57s
2025-07-12 14:18:34.907300 | orchestrator | Create report output directory ------------------------------------------ 1.01s
2025-07-12 14:18:34.907312 | orchestrator | Print report file information ------------------------------------------- 0.87s
2025-07-12 14:18:34.907327 | orchestrator | Prepare test data ------------------------------------------------------- 0.71s
2025-07-12 14:18:34.907338 | orchestrator | Get timestamp for report file ------------------------------------------- 0.68s
2025-07-12 14:18:34.907349 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.55s
2025-07-12 14:18:34.907360 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.52s
2025-07-12 14:18:34.907371 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.52s
2025-07-12 14:18:34.907382 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.51s
2025-07-12 14:18:34.907392 | orchestrator | Prepare test data ------------------------------------------------------- 0.51s
2025-07-12 14:18:34.907403 | orchestrator | Fail if count of encrypted OSDs does not match -------------------------- 0.51s
2025-07-12 14:18:34.907414 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.51s
2025-07-12 14:18:34.907425 | orchestrator | Prepare test data ------------------------------------------------------- 0.50s
2025-07-12 14:18:34.907445 | orchestrator | Set test result to passed if count matches ------------------------------ 0.50s
2025-07-12 14:18:35.200520 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.48s
2025-07-12 14:18:35.200667 | orchestrator | Flush handlers ---------------------------------------------------------- 0.40s
2025-07-12 14:18:35.200686 | orchestrator | Calculate OSD devices for each host ------------------------------------- 0.34s
2025-07-12 14:18:35.498188 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh
2025-07-12 14:18:35.505486 | orchestrator | + set -e
2025-07-12 14:18:35.505546 | orchestrator | + source /opt/manager-vars.sh
2025-07-12 14:18:35.505568 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-07-12 14:18:35.505644 | orchestrator | ++ NUMBER_OF_NODES=6
2025-07-12 14:18:35.505663 | orchestrator | ++ export CEPH_VERSION=reef
2025-07-12 14:18:35.505725 | orchestrator | ++ CEPH_VERSION=reef
2025-07-12 14:18:35.505746 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-07-12 14:18:35.505766 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-07-12 14:18:35.505784 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-07-12 14:18:35.505954 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-07-12 14:18:35.505985 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-12 14:18:35.506003 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-12 14:18:35.506091 | orchestrator | ++ export ARA=false
2025-07-12 14:18:35.506106 | orchestrator | ++ ARA=false
2025-07-12 14:18:35.506117 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-12 14:18:35.506128 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-12 14:18:35.506139 | orchestrator | ++ export TEMPEST=false
2025-07-12 14:18:35.506150 | orchestrator | ++ TEMPEST=false
2025-07-12 14:18:35.506160 | orchestrator | ++ export IS_ZUUL=true
2025-07-12 14:18:35.506171 | orchestrator | ++ IS_ZUUL=true
2025-07-12 14:18:35.506184 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180
2025-07-12 14:18:35.506204 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180
2025-07-12 14:18:35.506222 | orchestrator | ++ export EXTERNAL_API=false
2025-07-12 14:18:35.506240 | orchestrator | ++ EXTERNAL_API=false
2025-07-12 14:18:35.506260 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-12 14:18:35.506279 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-12 14:18:35.506290 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-12 14:18:35.506301 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-12 14:18:35.506312 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-12 14:18:35.506323 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-12 14:18:35.506345 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-07-12 14:18:35.506356 | orchestrator | + source /etc/os-release
2025-07-12 14:18:35.506367 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.2 LTS'
2025-07-12 14:18:35.506378 | orchestrator | ++ NAME=Ubuntu
2025-07-12 14:18:35.506389 | orchestrator | ++ VERSION_ID=24.04
2025-07-12 14:18:35.506399 | orchestrator | ++ VERSION='24.04.2 LTS (Noble Numbat)'
2025-07-12 14:18:35.506410 | orchestrator | ++ VERSION_CODENAME=noble
2025-07-12 14:18:35.506421 | orchestrator | ++ ID=ubuntu
2025-07-12 14:18:35.506432 | orchestrator | ++ ID_LIKE=debian
2025-07-12 14:18:35.506443 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/
2025-07-12 14:18:35.506454 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/
2025-07-12 14:18:35.506464 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
2025-07-12 14:18:35.506476 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
2025-07-12 14:18:35.506487 | orchestrator | ++ UBUNTU_CODENAME=noble
2025-07-12 14:18:35.506498 | orchestrator | ++ LOGO=ubuntu-logo
2025-07-12 14:18:35.506509 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]]
2025-07-12 14:18:35.506521 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client'
2025-07-12 14:18:35.506539 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2025-07-12 14:18:35.535215 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2025-07-12 14:18:56.871877 | orchestrator |
2025-07-12 14:18:56.871984 | orchestrator | # Status of Elasticsearch
2025-07-12 14:18:56.872000 | orchestrator |
2025-07-12 14:18:56.872012 | orchestrator | + pushd /opt/configuration/contrib
2025-07-12 14:18:56.872024 | orchestrator | + echo
2025-07-12 14:18:56.872036 | orchestrator | + echo '# Status of Elasticsearch'
2025-07-12 14:18:56.872047 | orchestrator
| + echo
2025-07-12 14:18:56.872058 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s
2025-07-12 14:18:57.057023 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0
2025-07-12 14:18:57.057122 | orchestrator |
2025-07-12 14:18:57.057129 | orchestrator | # Status of MariaDB
2025-07-12 14:18:57.057136 | orchestrator | + echo
2025-07-12 14:18:57.057141 | orchestrator | + echo '# Status of MariaDB'
2025-07-12 14:18:57.057145 | orchestrator | + echo
2025-07-12 14:18:57.057150 | orchestrator |
2025-07-12 14:18:57.057154 | orchestrator | + MARIADB_USER=root_shard_0
2025-07-12 14:18:57.057167 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1
2025-07-12 14:18:57.123892 | orchestrator | Reading package lists...
2025-07-12 14:18:57.458886 | orchestrator | Building dependency tree...
2025-07-12 14:18:57.459333 | orchestrator | Reading state information...
2025-07-12 14:18:57.832009 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4).
2025-07-12 14:18:57.832118 | orchestrator | bc set to manually installed.
2025-07-12 14:18:57.832134 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
2025-07-12 14:18:58.537742 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size)
2025-07-12 14:18:58.538564 | orchestrator |
2025-07-12 14:18:58.538598 | orchestrator | # Status of Prometheus
2025-07-12 14:18:58.538640 | orchestrator |
2025-07-12 14:18:58.538653 | orchestrator | + echo
2025-07-12 14:18:58.538666 | orchestrator | + echo '# Status of Prometheus'
2025-07-12 14:18:58.538677 | orchestrator | + echo
2025-07-12 14:18:58.538689 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy
2025-07-12 14:18:58.595773 | orchestrator | Unauthorized
2025-07-12 14:18:58.596195 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready
2025-07-12 14:18:58.648975 | orchestrator | Unauthorized
2025-07-12 14:18:58.650728 | orchestrator |
2025-07-12 14:18:58.650760 | orchestrator | # Status of RabbitMQ
2025-07-12 14:18:58.650773 | orchestrator |
2025-07-12 14:18:58.650784 | orchestrator | + echo
2025-07-12 14:18:58.650796 | orchestrator | + echo '# Status of RabbitMQ'
2025-07-12 14:18:58.650807 | orchestrator | + echo
2025-07-12 14:18:58.650838 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password
2025-07-12 14:18:59.129148 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0)
2025-07-12 14:18:59.138064 | orchestrator |
2025-07-12 14:18:59.138104 | orchestrator | # Status of Redis
2025-07-12 14:18:59.138117 | orchestrator |
2025-07-12 14:18:59.138128 | orchestrator | + echo
2025-07-12 14:18:59.138138 | orchestrator | + echo '# Status of Redis'
2025-07-12 14:18:59.138149 | orchestrator | + echo
2025-07-12 14:18:59.138161 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j
2025-07-12 14:18:59.144882 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001757s;;;0.000000;10.000000
2025-07-12 14:18:59.145323 | orchestrator | + popd
2025-07-12 14:18:59.145564 | orchestrator |
2025-07-12 14:18:59.145584 | orchestrator | # Create backup of MariaDB database
2025-07-12 14:18:59.145595 | orchestrator |
2025-07-12 14:18:59.145645 | orchestrator | + echo
2025-07-12 14:18:59.145658 | orchestrator | + echo '# Create backup of MariaDB database'
2025-07-12 14:18:59.145668 | orchestrator | + echo
2025-07-12 14:18:59.145678 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full
2025-07-12 14:19:00.999962 | orchestrator | 2025-07-12 14:19:00 | INFO  | Task 1968b30e-f918-476e-81ac-840bf608eee9 (mariadb_backup) was prepared for execution.
2025-07-12 14:19:01.000064 | orchestrator | 2025-07-12 14:19:00 | INFO  | It takes a moment until task 1968b30e-f918-476e-81ac-840bf608eee9 (mariadb_backup) has been started and output is visible here.
2025-07-12 14:20:00.432967 | orchestrator |
2025-07-12 14:20:00.433088 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 14:20:00.433106 | orchestrator |
2025-07-12 14:20:00.433119 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 14:20:00.433131 | orchestrator | Saturday 12 July 2025 14:19:04 +0000 (0:00:00.178) 0:00:00.178 *********
2025-07-12 14:20:00.433143 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:20:00.433155 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:20:00.433166 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:20:00.433177 | orchestrator |
2025-07-12 14:20:00.433188 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 14:20:00.433199 | orchestrator | Saturday 12 July 2025 14:19:05 +0000 (0:00:00.313) 0:00:00.492 *********
2025-07-12 14:20:00.433230 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2025-07-12 14:20:00.433243 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2025-07-12 14:20:00.433254 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2025-07-12 14:20:00.433265 | orchestrator |
2025-07-12 14:20:00.433276 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2025-07-12 14:20:00.433287 | orchestrator |
2025-07-12 14:20:00.433324 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2025-07-12 14:20:00.433336 | orchestrator | Saturday 12 July 2025 14:19:05 +0000 (0:00:00.576) 0:00:01.068 *********
2025-07-12 14:20:00.433348 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 14:20:00.433359 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-07-12 14:20:00.433370 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-07-12 14:20:00.433381 | orchestrator |
2025-07-12 14:20:00.433469 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-07-12 14:20:00.433481 | orchestrator | Saturday 12 July 2025 14:19:06 +0000 (0:00:00.382) 0:00:01.451 *********
2025-07-12 14:20:00.433492 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 14:20:00.433506 | orchestrator |
2025-07-12 14:20:00.433518 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2025-07-12 14:20:00.433530 | orchestrator | Saturday 12 July 2025 14:19:06 +0000 (0:00:02.960) 0:00:01.977 *********
2025-07-12 14:20:00.433542 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:20:00.433555 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:20:00.433567 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:20:00.433580 | orchestrator |
2025-07-12 14:20:00.433592 | orchestrator | TASK [mariadb : Taking full database backup via
Mariabackup] *******************
2025-07-12 14:20:00.433605 | orchestrator | Saturday 12 July 2025 14:19:09 +0000 (0:00:02.960) 0:00:04.938 *********
2025-07-12 14:20:00.433618 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2025-07-12 14:20:00.433630 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2025-07-12 14:20:00.433643 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-07-12 14:20:00.433655 | orchestrator | mariadb_bootstrap_restart
2025-07-12 14:20:00.433690 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:20:00.433704 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:20:00.433716 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:20:00.433729 | orchestrator |
2025-07-12 14:20:00.433741 | orchestrator | PLAY [Restart mariadb services] ************************************************
2025-07-12 14:20:00.433753 | orchestrator | skipping: no hosts matched
2025-07-12 14:20:00.433765 | orchestrator |
2025-07-12 14:20:00.433778 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-07-12 14:20:00.433790 | orchestrator | skipping: no hosts matched
2025-07-12 14:20:00.433803 | orchestrator |
2025-07-12 14:20:00.433815 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2025-07-12 14:20:00.433828 | orchestrator | skipping: no hosts matched
2025-07-12 14:20:00.433842 | orchestrator |
2025-07-12 14:20:00.433855 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2025-07-12 14:20:00.433866 | orchestrator |
2025-07-12 14:20:00.433876 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2025-07-12 14:20:00.433887 | orchestrator | Saturday 12 July 2025 14:19:59 +0000 (0:00:49.798) 0:00:54.737 *********
2025-07-12 14:20:00.433898 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:20:00.433909 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:20:00.433919 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:20:00.433930 | orchestrator |
2025-07-12 14:20:00.433941 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2025-07-12 14:20:00.433966 | orchestrator | Saturday 12 July 2025 14:19:59 +0000 (0:00:00.311) 0:00:55.048 *********
2025-07-12 14:20:00.433977 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:20:00.433988 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:20:00.433999 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:20:00.434010 | orchestrator |
2025-07-12 14:20:00.434077 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 14:20:00.434090 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 14:20:00.434113 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-07-12 14:20:00.434124 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-07-12 14:20:00.434135 | orchestrator |
2025-07-12 14:20:00.434146 | orchestrator |
2025-07-12 14:20:00.434157 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 14:20:00.434168 | orchestrator | Saturday 12 July 2025 14:20:00 +0000 (0:00:00.214) 0:00:55.262 *********
2025-07-12 14:20:00.434179 | orchestrator | ===============================================================================
2025-07-12 14:20:00.434190 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 49.80s
2025-07-12 14:20:00.434220 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 2.96s
2025-07-12 14:20:00.434232 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.58s
2025-07-12 14:20:00.434243 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.53s
2025-07-12 14:20:00.434254 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.38s
2025-07-12 14:20:00.434265 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2025-07-12 14:20:00.434276 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.31s
2025-07-12 14:20:00.434287 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.21s
2025-07-12 14:20:00.709129 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh
2025-07-12 14:20:00.716363 | orchestrator | + set -e
2025-07-12 14:20:00.717850 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-07-12 14:20:00.717896 | orchestrator | ++ export INTERACTIVE=false
2025-07-12 14:20:00.717917 | orchestrator | ++ INTERACTIVE=false
2025-07-12 14:20:00.717935 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-07-12 14:20:00.717954 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-07-12 14:20:00.717974 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-07-12 14:20:00.718331 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-07-12 14:20:00.724752 | orchestrator |
2025-07-12 14:20:00.724792 | orchestrator | # OpenStack endpoints
2025-07-12 14:20:00.724805 | orchestrator |
2025-07-12 14:20:00.724816 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-07-12 14:20:00.724828 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-07-12 14:20:00.724839 | orchestrator | + export OS_CLOUD=admin
2025-07-12 14:20:00.724850 | orchestrator | + OS_CLOUD=admin
2025-07-12 14:20:00.724861 | orchestrator | + echo
2025-07-12 14:20:00.724872 | orchestrator | + echo '# OpenStack
endpoints'
2025-07-12 14:20:00.724883 | orchestrator | + echo
2025-07-12 14:20:00.724894 | orchestrator | + openstack endpoint list
2025-07-12 14:20:04.247116 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2025-07-12 14:20:04.247193 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
2025-07-12 14:20:04.247199 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2025-07-12 14:20:04.247204 | orchestrator | | 092baffed66c4cc1ba25c2df4328843b | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
2025-07-12 14:20:04.247208 | orchestrator | | 0cc237754875407a97c99c544b5f7ff2 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
2025-07-12 14:20:04.247212 | orchestrator | | 0d5b548c0d9542b386dcb84643ab4368 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
2025-07-12 14:20:04.247216 | orchestrator | | 2328af085a644fcead1b8a70b9a7f6ba | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2025-07-12 14:20:04.247241 | orchestrator | | 2bc9ac61d5ea4f378ab8c83e9f52df26 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2025-07-12 14:20:04.247245 | orchestrator | | 3c543ca9f1614031b9de37f49711536e | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
2025-07-12 14:20:04.247248 | orchestrator | | 3db82b888e7b4c108d4122e99960646f | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2025-07-12 14:20:04.247252 | orchestrator | | 3e9a5b433f594f3a841e7bb4a6741863 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
2025-07-12 14:20:04.247256 | orchestrator | | 4909457aef9b4e239335bf25f60cfd30 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
2025-07-12 14:20:04.247260 | orchestrator | | 5ae36098099a40e2a00495879bb57be5 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2025-07-12 14:20:04.247264 | orchestrator | | 6b76515f68884219991a725162f94059 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
2025-07-12 14:20:04.247268 | orchestrator | | 6f42709573dd4f758b2e30982f5de2c0 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
2025-07-12 14:20:04.247272 | orchestrator | | 70c181f7f10149429849506225ddbdc2 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 |
2025-07-12 14:20:04.247276 | orchestrator | | 7a874a7ffb634b5b8a704b122a0abfdb | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2025-07-12 14:20:04.247280 | orchestrator | | 8d469d0431b74b2697c18fc9304a6128 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2025-07-12 14:20:04.247283 | orchestrator | | 91d89669f87b4dd1a8e93bf4a5b48736 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
2025-07-12 14:20:04.247287 | orchestrator | | 9d0e3229807c4dfcaec00d62b3366650 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
2025-07-12 14:20:04.247291 | orchestrator | | a10dabda270d41c09cc54c303aaf9938 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2025-07-12 14:20:04.247295 | orchestrator | | a93b1072b53045378bfa1847eac7a2c1 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
2025-07-12 14:20:04.247298 | orchestrator | | bf7871ba35a64bf9ab86f72ee1b77cdc | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
2025-07-12 14:20:04.247312 | orchestrator | | f2321d5e46924a5cb861c1d53f7a5ab0 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 |
2025-07-12 14:20:04.247316 | orchestrator | | ff2a77461b9c4abda921468c176a2b7c | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
2025-07-12 14:20:04.247320 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2025-07-12 14:20:04.511740 | orchestrator |
2025-07-12 14:20:04.511839 | orchestrator | # Cinder
2025-07-12 14:20:04.511854 | orchestrator |
2025-07-12 14:20:04.511890 | orchestrator | + echo
2025-07-12 14:20:04.511902 | orchestrator | + echo '# Cinder'
2025-07-12 14:20:04.511914 | orchestrator | + echo
2025-07-12 14:20:04.511927 | orchestrator | + openstack volume service list
2025-07-12 14:20:07.744765 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2025-07-12 14:20:07.744878 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2025-07-12 14:20:07.744894 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2025-07-12 14:20:07.744906 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-07-12T14:19:59.000000 |
2025-07-12 14:20:07.744917 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-07-12T14:19:59.000000 |
2025-07-12 14:20:07.744928 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-07-12T14:19:59.000000 |
2025-07-12 14:20:07.744940 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-07-12T14:19:59.000000 |
2025-07-12 14:20:07.744950 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-07-12T14:20:02.000000 |
2025-07-12 14:20:07.744961 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-07-12T14:20:02.000000 |
2025-07-12 14:20:07.744972 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-07-12T14:20:06.000000 |
2025-07-12 14:20:07.744983 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-07-12T14:20:07.000000 |
2025-07-12 14:20:07.744994 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-07-12T14:20:07.000000 |
2025-07-12 14:20:07.745005 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2025-07-12 14:20:08.023447 | orchestrator |
2025-07-12 14:20:08.023545 | orchestrator | # Neutron
2025-07-12 14:20:08.023559 | orchestrator |
2025-07-12 14:20:08.023571 | orchestrator | + echo
2025-07-12 14:20:08.023583 | orchestrator | + echo '# Neutron'
2025-07-12 14:20:08.023618 | orchestrator | + echo
2025-07-12 14:20:08.023631 | orchestrator | + openstack network agent list
2025-07-12 14:20:10.666590 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2025-07-12 14:20:10.666736 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2025-07-12 14:20:10.666757 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2025-07-12 14:20:10.666767 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-07-12
14:20:10.666776 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2025-07-12 14:20:10.666785 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2025-07-12 14:20:10.666794 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2025-07-12 14:20:10.666803 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2025-07-12 14:20:10.666811 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2025-07-12 14:20:10.666820 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2025-07-12 14:20:10.666856 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2025-07-12 14:20:10.666866 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2025-07-12 14:20:10.666875 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2025-07-12 14:20:10.925269 | orchestrator | + openstack network service provider list
2025-07-12 14:20:13.623377 | orchestrator | +---------------+------+---------+
2025-07-12 14:20:13.623518 | orchestrator | | Service Type | Name | Default |
2025-07-12 14:20:13.623545 | orchestrator | +---------------+------+---------+
2025-07-12 14:20:13.623563 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2025-07-12 14:20:13.623590 | orchestrator | +---------------+------+---------+
2025-07-12 14:20:13.880596 | orchestrator |
2025-07-12 14:20:13.880758 | orchestrator | # Nova
2025-07-12 14:20:13.880775 | orchestrator |
2025-07-12 14:20:13.880787 | orchestrator | + echo
2025-07-12 14:20:13.880799 | orchestrator | + echo '# Nova'
2025-07-12 14:20:13.880811 | orchestrator | + echo
2025-07-12 14:20:13.880823 | orchestrator | + openstack compute service list
2025-07-12 14:20:17.104527 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2025-07-12 14:20:17.104631 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2025-07-12 14:20:17.104646 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2025-07-12 14:20:17.104658 | orchestrator | | cd8eb948-7f66-43d2-baa7-514a76b62e8b | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-07-12T14:20:15.000000 |
2025-07-12 14:20:17.104670 | orchestrator | | 26327e0e-e4f1-4160-a20d-97c2b789fcd2 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-07-12T14:20:09.000000 |
2025-07-12 14:20:17.104681 | orchestrator | | 24db2b63-e02a-477c-895a-61d4f7f4b9a6 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-07-12T14:20:10.000000 |
2025-07-12 14:20:17.104750 | orchestrator | | 32456944-cc9a-4998-8737-474950b18120 | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-07-12T14:20:13.000000 |
2025-07-12 14:20:17.104762 | orchestrator | | 1f75a7c2-43a0-4ba3-9081-22a377b1d099 | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-07-12T14:20:15.000000 |
2025-07-12 14:20:17.104773 | orchestrator | | fb78daf0-c0e5-46c3-a279-a125683b4250 | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-07-12T14:20:15.000000 |
2025-07-12 14:20:17.104784 | orchestrator | | 23ffeef8-f8ca-4c9a-88f1-a28003b7beef | nova-compute | testbed-node-3 | nova | enabled | up | 2025-07-12T14:20:16.000000 |
2025-07-12 14:20:17.104795 | orchestrator | | 9c2349f3-6d8c-45df-b4fe-e135174b6115 | nova-compute | testbed-node-4 | nova | enabled | up | 2025-07-12T14:20:16.000000 |
2025-07-12 14:20:17.104806 | orchestrator | | 2811bd46-e6b1-43a8-b7ce-3e4bc08fd0e8 | nova-compute | testbed-node-5 | nova | enabled | up | 2025-07-12T14:20:16.000000 |
2025-07-12 14:20:17.104817 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2025-07-12 14:20:17.374409 | orchestrator | + openstack hypervisor list
2025-07-12 14:20:21.707926 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2025-07-12 14:20:21.708028 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2025-07-12 14:20:21.708042 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2025-07-12 14:20:21.708053 | orchestrator | | 525e76da-4545-4916-ab16-0610db1a0985 | testbed-node-3 | QEMU | 192.168.16.13 | up |
2025-07-12 14:20:21.708064 | orchestrator | | 2d55df39-6e79-424e-a000-2b29f067b44e | testbed-node-4 | QEMU | 192.168.16.14 | up |
2025-07-12 14:20:21.708075 | orchestrator | | 81c946f9-7127-479a-a0d0-3c8b8708db3e | testbed-node-5 | QEMU | 192.168.16.15 | up |
2025-07-12 14:20:21.708111 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2025-07-12 14:20:21.979087 | orchestrator |
2025-07-12 14:20:21.979162 | orchestrator | # Run OpenStack test play
2025-07-12 14:20:21.979176 | orchestrator |
2025-07-12 14:20:21.979187 | orchestrator | + echo
2025-07-12 14:20:21.979197 | orchestrator | + echo '# Run OpenStack test play'
2025-07-12 14:20:21.979208 | orchestrator | + echo
2025-07-12 14:20:21.979218 | orchestrator | + osism apply --environment openstack test
2025-07-12 14:20:23.746826 | orchestrator | 2025-07-12 14:20:23 | INFO  | Trying to run play test in environment openstack 2025-07-12
14:20:33.923427 | orchestrator | 2025-07-12 14:20:33 | INFO  | Task 5d5f98de-bd35-4e47-b6d0-4c4ac71d5121 (test) was prepared for execution.
2025-07-12 14:20:33.923540 | orchestrator | 2025-07-12 14:20:33 | INFO  | It takes a moment until task 5d5f98de-bd35-4e47-b6d0-4c4ac71d5121 (test) has been started and output is visible here.
2025-07-12 14:26:26.654795 | orchestrator |
2025-07-12 14:26:26.654919 | orchestrator | PLAY [Create test project] *****************************************************
2025-07-12 14:26:26.655003 | orchestrator |
2025-07-12 14:26:26.655017 | orchestrator | TASK [Create test domain] ******************************************************
2025-07-12 14:26:26.655029 | orchestrator | Saturday 12 July 2025 14:20:37 +0000 (0:00:00.077) 0:00:00.077 *********
2025-07-12 14:26:26.655041 | orchestrator | changed: [localhost]
2025-07-12 14:26:26.655054 | orchestrator |
2025-07-12 14:26:26.655065 | orchestrator | TASK [Create test-admin user] **************************************************
2025-07-12 14:26:26.655076 | orchestrator | Saturday 12 July 2025 14:20:41 +0000 (0:00:03.412) 0:00:03.490 *********
2025-07-12 14:26:26.655088 | orchestrator | changed: [localhost]
2025-07-12 14:26:26.655099 | orchestrator |
2025-07-12 14:26:26.655110 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2025-07-12 14:26:26.655121 | orchestrator | Saturday 12 July 2025 14:20:45 +0000 (0:00:03.925) 0:00:07.416 *********
2025-07-12 14:26:26.655132 | orchestrator | changed: [localhost]
2025-07-12 14:26:26.655143 | orchestrator |
2025-07-12 14:26:26.655154 | orchestrator | TASK [Create test project] *****************************************************
2025-07-12 14:26:26.655165 | orchestrator | Saturday 12 July 2025 14:20:51 +0000 (0:00:06.361) 0:00:13.777 *********
2025-07-12 14:26:26.655257 | orchestrator | changed: [localhost]
2025-07-12 14:26:26.655270 | orchestrator |
2025-07-12 14:26:26.655282 | orchestrator | TASK [Create test user] ********************************************************
2025-07-12 14:26:26.655293 | orchestrator | Saturday 12 July 2025 14:20:55 +0000 (0:00:03.996) 0:00:17.774 *********
2025-07-12 14:26:26.655304 | orchestrator | changed: [localhost]
2025-07-12 14:26:26.655317 | orchestrator |
2025-07-12 14:26:26.655329 | orchestrator | TASK [Add member roles to user test] *******************************************
2025-07-12 14:26:26.655342 | orchestrator | Saturday 12 July 2025 14:20:59 +0000 (0:00:04.213) 0:00:21.988 *********
2025-07-12 14:26:26.655354 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2025-07-12 14:26:26.655368 | orchestrator | changed: [localhost] => (item=member)
2025-07-12 14:26:26.655381 | orchestrator | changed: [localhost] => (item=creator)
2025-07-12 14:26:26.655393 | orchestrator |
2025-07-12 14:26:26.655405 | orchestrator | TASK [Create test server group] ************************************************
2025-07-12 14:26:26.655417 | orchestrator | Saturday 12 July 2025 14:21:11 +0000 (0:00:12.122) 0:00:34.110 *********
2025-07-12 14:26:26.655430 | orchestrator | changed: [localhost]
2025-07-12 14:26:26.655442 | orchestrator |
2025-07-12 14:26:26.655454 | orchestrator | TASK [Create ssh security group] ***********************************************
2025-07-12 14:26:26.655466 | orchestrator | Saturday 12 July 2025 14:21:16 +0000 (0:00:04.235) 0:00:38.345 *********
2025-07-12 14:26:26.655478 | orchestrator | changed: [localhost]
2025-07-12 14:26:26.655490 | orchestrator |
2025-07-12 14:26:26.655502 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2025-07-12 14:26:26.655515 | orchestrator | Saturday 12 July 2025 14:21:21 +0000 (0:00:05.449) 0:00:43.795 *********
2025-07-12 14:26:26.655555 | orchestrator | changed: [localhost]
2025-07-12 14:26:26.655568 | orchestrator |
2025-07-12 14:26:26.655580 | orchestrator | TASK [Create icmp security group] **********************************************
2025-07-12 14:26:26.655592 | orchestrator | Saturday 12 July 2025 14:21:25 +0000 (0:00:04.122) 0:00:47.918 *********
2025-07-12 14:26:26.655605 | orchestrator | changed: [localhost]
2025-07-12 14:26:26.655617 | orchestrator |
2025-07-12 14:26:26.655629 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2025-07-12 14:26:26.655641 | orchestrator | Saturday 12 July 2025 14:21:29 +0000 (0:00:03.928) 0:00:51.846 *********
2025-07-12 14:26:26.655653 | orchestrator | changed: [localhost]
2025-07-12 14:26:26.655666 | orchestrator |
2025-07-12 14:26:26.655678 | orchestrator | TASK [Create test keypair] *****************************************************
2025-07-12 14:26:26.655689 | orchestrator | Saturday 12 July 2025 14:21:33 +0000 (0:00:04.288) 0:00:56.135 *********
2025-07-12 14:26:26.655699 | orchestrator | changed: [localhost]
2025-07-12 14:26:26.655710 | orchestrator |
2025-07-12 14:26:26.655721 | orchestrator | TASK [Create test network topology] ********************************************
2025-07-12 14:26:26.655732 | orchestrator | Saturday 12 July 2025 14:21:37 +0000 (0:00:03.849) 0:00:59.984 *********
2025-07-12 14:26:26.655743 | orchestrator | changed: [localhost]
2025-07-12 14:26:26.655753 | orchestrator |
2025-07-12 14:26:26.655764 | orchestrator | TASK [Create test instances] ***************************************************
2025-07-12 14:26:26.655790 | orchestrator | Saturday 12 July 2025 14:21:53 +0000 (0:00:15.845) 0:01:15.830 *********
2025-07-12 14:26:26.655801 | orchestrator | changed: [localhost] => (item=test)
2025-07-12 14:26:26.655813 | orchestrator | changed: [localhost] => (item=test-1)
2025-07-12 14:26:26.655824 | orchestrator | changed: [localhost] => (item=test-2)
2025-07-12 14:26:26.655835 | orchestrator |
2025-07-12 14:26:26.655845 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-07-12 14:26:26.655856 | orchestrator | changed: [localhost] => (item=test-3)
2025-07-12 14:26:26.655867 | orchestrator |
2025-07-12 14:26:26.655878 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-07-12 14:26:26.655889 | orchestrator | changed: [localhost] => (item=test-4)
2025-07-12 14:26:26.655899 | orchestrator |
2025-07-12 14:26:26.655910 | orchestrator | TASK [Add metadata to instances] ***********************************************
2025-07-12 14:26:26.655921 | orchestrator | Saturday 12 July 2025 14:25:03 +0000 (0:03:09.599) 0:04:25.430 *********
2025-07-12 14:26:26.655969 | orchestrator | changed: [localhost] => (item=test)
2025-07-12 14:26:26.655981 | orchestrator | changed: [localhost] => (item=test-1)
2025-07-12 14:26:26.655992 | orchestrator | changed: [localhost] => (item=test-2)
2025-07-12 14:26:26.656003 | orchestrator | changed: [localhost] => (item=test-3)
2025-07-12 14:26:26.656014 | orchestrator | changed: [localhost] => (item=test-4)
2025-07-12 14:26:26.656025 | orchestrator |
2025-07-12 14:26:26.656036 | orchestrator | TASK [Add tag to instances] ****************************************************
2025-07-12 14:26:26.656047 | orchestrator | Saturday 12 July 2025 14:25:26 +0000 (0:00:23.724) 0:04:49.154 *********
2025-07-12 14:26:26.656058 | orchestrator | changed: [localhost] => (item=test)
2025-07-12 14:26:26.656069 | orchestrator | changed: [localhost] => (item=test-1)
2025-07-12 14:26:26.656080 | orchestrator | changed: [localhost] => (item=test-2)
2025-07-12 14:26:26.656091 | orchestrator | changed: [localhost] => (item=test-3)
2025-07-12 14:26:26.656122 | orchestrator | changed: [localhost] => (item=test-4)
2025-07-12 14:26:26.656134 | orchestrator |
2025-07-12 14:26:26.656145 | orchestrator | TASK [Create test volume] ******************************************************
2025-07-12 14:26:26.656156 | orchestrator | Saturday 12 July 2025 14:26:00 +0000 (0:00:33.720) 0:05:22.875 *********
2025-07-12 14:26:26.656167 | orchestrator | changed: [localhost]
2025-07-12 14:26:26.656178 | orchestrator |
2025-07-12 14:26:26.656189 | orchestrator | TASK [Attach test volume] ******************************************************
2025-07-12 14:26:26.656200 | orchestrator | Saturday 12 July 2025 14:26:07 +0000 (0:00:06.827) 0:05:29.702 *********
2025-07-12 14:26:26.656211 | orchestrator | changed: [localhost]
2025-07-12 14:26:26.656234 | orchestrator |
2025-07-12 14:26:26.656245 | orchestrator | TASK [Create floating ip address] **********************************************
2025-07-12 14:26:26.656256 | orchestrator | Saturday 12 July 2025 14:26:21 +0000 (0:00:13.706) 0:05:43.409 *********
2025-07-12 14:26:26.656267 | orchestrator | ok: [localhost]
2025-07-12 14:26:26.656279 | orchestrator |
2025-07-12 14:26:26.656291 | orchestrator | TASK [Print floating ip address] ***********************************************
2025-07-12 14:26:26.656301 | orchestrator | Saturday 12 July 2025 14:26:26 +0000 (0:00:05.161) 0:05:48.570 *********
2025-07-12 14:26:26.656312 | orchestrator | ok: [localhost] => {
2025-07-12 14:26:26.656323 | orchestrator |  "msg": "192.168.112.153"
2025-07-12 14:26:26.656335 | orchestrator | }
2025-07-12 14:26:26.656346 | orchestrator |
2025-07-12 14:26:26.656357 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 14:26:26.656368 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 14:26:26.656380 | orchestrator |
2025-07-12 14:26:26.656391 | orchestrator |
2025-07-12 14:26:26.656402 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 14:26:26.656413 | orchestrator | Saturday 12 July 2025 14:26:26 +0000 (0:00:00.052) 0:05:48.623 *********
2025-07-12 14:26:26.656424 | orchestrator | ===============================================================================
2025-07-12 14:26:26.656440 | orchestrator | Create test instances ------------------------------------------------- 189.60s
2025-07-12 14:26:26.656451 | orchestrator | Add tag to instances --------------------------------------------------- 33.72s
2025-07-12 14:26:26.656462 | orchestrator | Add metadata to instances ---------------------------------------------- 23.72s
2025-07-12 14:26:26.656487 | orchestrator | Create test network topology ------------------------------------------- 15.85s
2025-07-12 14:26:26.656498 | orchestrator | Attach test volume ----------------------------------------------------- 13.71s
2025-07-12 14:26:26.656509 | orchestrator | Add member roles to user test ------------------------------------------ 12.12s
2025-07-12 14:26:26.656520 | orchestrator | Create test volume ------------------------------------------------------ 6.83s
2025-07-12 14:26:26.656531 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.36s
2025-07-12 14:26:26.656542 | orchestrator | Create ssh security group ----------------------------------------------- 5.45s
2025-07-12 14:26:26.656553 | orchestrator | Create floating ip address ---------------------------------------------- 5.16s
2025-07-12 14:26:26.656564 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.29s
2025-07-12 14:26:26.656574 | orchestrator | Create test server group ------------------------------------------------ 4.24s
2025-07-12 14:26:26.656585 | orchestrator | Create test user -------------------------------------------------------- 4.21s
2025-07-12 14:26:26.656596 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.12s
2025-07-12 14:26:26.656607 | orchestrator | Create test project ----------------------------------------------------- 4.00s
2025-07-12 14:26:26.656618 | orchestrator | Create icmp security group ---------------------------------------------- 3.93s
2025-07-12 14:26:26.656629 | orchestrator | Create test-admin user -------------------------------------------------- 3.93s
2025-07-12 14:26:26.656639 | orchestrator | Create test keypair ----------------------------------------------------- 3.85s
2025-07-12 14:26:26.656656 | orchestrator | Create test domain ------------------------------------------------------ 3.41s
2025-07-12 14:26:26.656667 | orchestrator | Print floating ip address ----------------------------------------------- 0.05s
2025-07-12 14:26:26.931453 | orchestrator | + server_list
2025-07-12 14:26:26.931566 | orchestrator | + openstack --os-cloud test server list
2025-07-12 14:26:30.661468 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-07-12 14:26:30.661652 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2025-07-12 14:26:30.661697 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-07-12 14:26:30.661709 | orchestrator | | 4883bf9d-d142-4a52-bd7e-4ee2ad886b63 | test-4 | ACTIVE | auto_allocated_network=10.42.0.52, 192.168.112.147 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-07-12 14:26:30.661720 | orchestrator | | 53574549-94db-49ab-807e-0a3bf15d6575 | test-3 | ACTIVE | auto_allocated_network=10.42.0.15, 192.168.112.120 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-07-12 14:26:30.661731 | orchestrator | | b5d3b888-7b2d-4ff2-96b2-10c60cc2c813 | test-2 | ACTIVE | auto_allocated_network=10.42.0.37, 192.168.112.144 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-07-12 14:26:30.661742 | orchestrator | | 785124fa-a027-4acf-b85e-878a479915a1 | test-1 | ACTIVE | auto_allocated_network=10.42.0.23, 192.168.112.195 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-07-12 14:26:30.661753 | orchestrator | | 9c1ec1ed-7770-4a79-a42f-1affa42708a6 | test | ACTIVE | auto_allocated_network=10.42.0.7, 192.168.112.153 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-07-12 14:26:30.661764 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-07-12 14:26:30.940167 | orchestrator | + openstack --os-cloud test server show test
2025-07-12 14:26:34.406237 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-12 14:26:34.406347 | orchestrator | | Field | Value |
2025-07-12 14:26:34.406370 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-12 14:26:34.406382 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-07-12 14:26:34.406393 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-07-12 14:26:34.406405 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-07-12 14:26:34.406416 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2025-07-12 14:26:34.406427 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-07-12 14:26:34.406457 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-07-12 14:26:34.406469 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-07-12 14:26:34.406480 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-07-12 14:26:34.406508 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-07-12 14:26:34.406520 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-07-12 14:26:34.406540 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-07-12 14:26:34.406552 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-07-12 14:26:34.406563 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-07-12 14:26:34.406575 | orchestrator | | OS-EXT-STS:task_state | None |
2025-07-12 14:26:34.406585 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-07-12 14:26:34.406597 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-12T14:22:26.000000 |
2025-07-12 14:26:34.406620 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-07-12 14:26:34.406631 | orchestrator | | accessIPv4 | |
2025-07-12 14:26:34.406642 | orchestrator | | accessIPv6 | |
2025-07-12 14:26:34.406653 | orchestrator | | addresses | auto_allocated_network=10.42.0.7, 192.168.112.153 |
2025-07-12 14:26:34.406672 | orchestrator | | config_drive | |
2025-07-12 14:26:34.406686 | orchestrator | | created | 2025-07-12T14:22:02Z |
2025-07-12 14:26:34.406699 | orchestrator | | description | None |
2025-07-12 14:26:34.406711 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-07-12 14:26:34.406724 | orchestrator | | hostId | f35030450f2e5568c419d020931544adba55361325c99d6c90f97284 |
2025-07-12 14:26:34.406736 | orchestrator | | host_status | None |
2025-07-12 14:26:34.406756 | orchestrator | | id | 9c1ec1ed-7770-4a79-a42f-1affa42708a6 |
2025-07-12 14:26:34.406773 | orchestrator | | image | Cirros 0.6.2 (9ac8d80c-ff3f-4035-bebc-2d18f3d2ea89) |
2025-07-12 14:26:34.406786 | orchestrator | | key_name | test |
2025-07-12 14:26:34.406799 | orchestrator | | locked | False |
2025-07-12 14:26:34.406811 | orchestrator | | locked_reason | None |
2025-07-12 14:26:34.406824 | orchestrator | | name | test |
2025-07-12 14:26:34.406843 | orchestrator | | pinned_availability_zone | None |
2025-07-12 14:26:34.406857 | orchestrator | | progress | 0 |
2025-07-12 14:26:34.406869 | orchestrator | | project_id | 0e8a91e32bc94fdf85c013024f202c69 |
2025-07-12 14:26:34.406882 | orchestrator | | properties | hostname='test' |
2025-07-12 14:26:34.406894 | orchestrator | | security_groups | name='icmp' |
2025-07-12 14:26:34.406913 | orchestrator | | | name='ssh' |
2025-07-12 14:26:34.406926 | orchestrator | | server_groups | None |
2025-07-12 14:26:34.406967 | orchestrator | | status | ACTIVE |
2025-07-12 14:26:34.406982 | orchestrator | | tags | test |
2025-07-12 14:26:34.406994 | orchestrator | | trusted_image_certificates | None |
2025-07-12 14:26:34.407008 | orchestrator | | updated | 2025-07-12T14:25:08Z |
2025-07-12 14:26:34.407026 | orchestrator | | user_id | e22e7bf79872417aaafe4ca3c674a0fa |
2025-07-12 14:26:34.407038 | orchestrator | | volumes_attached | delete_on_termination='False', id='cbf6583d-885d-4ad4-b066-a0eeb5b3f197' |
2025-07-12 14:26:34.411780 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-12 14:26:34.699897 | orchestrator | + openstack --os-cloud test server show test-1
2025-07-12 14:26:37.906567 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-12 14:26:37.906704 | orchestrator | | Field | Value |
2025-07-12 14:26:37.906752 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-12 14:26:37.906765 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-07-12 14:26:37.906777 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-07-12 14:26:37.906806 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-07-12 14:26:37.906818 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2025-07-12 14:26:37.906829 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-07-12 14:26:37.906841 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-07-12 14:26:37.906852 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-07-12 14:26:37.906863 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-07-12 14:26:37.906894 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-07-12 14:26:37.906906 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-07-12 14:26:37.906934 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-07-12 14:26:37.907004 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-07-12 14:26:37.907015 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-07-12 14:26:37.907027 | orchestrator | | OS-EXT-STS:task_state | None |
2025-07-12 14:26:37.907039 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-07-12 14:26:37.907050 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-12T14:23:07.000000 |
2025-07-12 14:26:37.907061 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-07-12 14:26:37.907073 | orchestrator | | accessIPv4 | |
2025-07-12 14:26:37.907086 | orchestrator | | accessIPv6 | |
2025-07-12 14:26:37.907099 | orchestrator | | addresses | auto_allocated_network=10.42.0.23,
192.168.112.195 | 2025-07-12 14:26:37.907127 | orchestrator | | config_drive | | 2025-07-12 14:26:37.907149 | orchestrator | | created | 2025-07-12T14:22:46Z | 2025-07-12 14:26:37.907162 | orchestrator | | description | None | 2025-07-12 14:26:37.907174 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-07-12 14:26:37.907187 | orchestrator | | hostId | b8b36323315e2b9efd256883ef553869924eb85d524dbf5e01dbaa58 | 2025-07-12 14:26:37.907204 | orchestrator | | host_status | None | 2025-07-12 14:26:37.907217 | orchestrator | | id | 785124fa-a027-4acf-b85e-878a479915a1 | 2025-07-12 14:26:37.907229 | orchestrator | | image | Cirros 0.6.2 (9ac8d80c-ff3f-4035-bebc-2d18f3d2ea89) | 2025-07-12 14:26:37.907242 | orchestrator | | key_name | test | 2025-07-12 14:26:37.907254 | orchestrator | | locked | False | 2025-07-12 14:26:37.907267 | orchestrator | | locked_reason | None | 2025-07-12 14:26:37.907287 | orchestrator | | name | test-1 | 2025-07-12 14:26:37.907306 | orchestrator | | pinned_availability_zone | None | 2025-07-12 14:26:37.907319 | orchestrator | | progress | 0 | 2025-07-12 14:26:37.907332 | orchestrator | | project_id | 0e8a91e32bc94fdf85c013024f202c69 | 2025-07-12 14:26:37.907344 | orchestrator | | properties | hostname='test-1' | 2025-07-12 14:26:37.907356 | orchestrator | | security_groups | name='icmp' | 2025-07-12 14:26:37.907374 | orchestrator | | | name='ssh' | 2025-07-12 14:26:37.907386 | orchestrator | | server_groups | None | 2025-07-12 14:26:37.907399 | orchestrator | | status | ACTIVE | 2025-07-12 14:26:37.907411 | orchestrator | | tags | test | 2025-07-12 14:26:37.907423 | orchestrator | | trusted_image_certificates | None | 2025-07-12 14:26:37.907443 | orchestrator 
| | updated | 2025-07-12T14:25:12Z | 2025-07-12 14:26:37.907461 | orchestrator | | user_id | e22e7bf79872417aaafe4ca3c674a0fa | 2025-07-12 14:26:37.907472 | orchestrator | | volumes_attached | | 2025-07-12 14:26:37.911165 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-12 14:26:38.173051 | orchestrator | + openstack --os-cloud test server show test-2 2025-07-12 14:26:41.274697 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-12 14:26:41.274838 | orchestrator | | Field | Value | 2025-07-12 14:26:41.274875 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-12 14:26:41.274888 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-07-12 14:26:41.274900 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-07-12 14:26:41.274938 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-07-12 14:26:41.275001 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2025-07-12 14:26:41.275042 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-07-12 14:26:41.275055 | orchestrator | | 
OS-EXT-SRV-ATTR:instance_name | None | 2025-07-12 14:26:41.275066 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-07-12 14:26:41.275077 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-07-12 14:26:41.275111 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-07-12 14:26:41.275123 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-07-12 14:26:41.275135 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-07-12 14:26:41.275152 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-07-12 14:26:41.275164 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-07-12 14:26:41.275176 | orchestrator | | OS-EXT-STS:task_state | None | 2025-07-12 14:26:41.275202 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-07-12 14:26:41.275215 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-12T14:23:46.000000 | 2025-07-12 14:26:41.275228 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-07-12 14:26:41.275241 | orchestrator | | accessIPv4 | | 2025-07-12 14:26:41.275254 | orchestrator | | accessIPv6 | | 2025-07-12 14:26:41.275267 | orchestrator | | addresses | auto_allocated_network=10.42.0.37, 192.168.112.144 | 2025-07-12 14:26:41.275286 | orchestrator | | config_drive | | 2025-07-12 14:26:41.275299 | orchestrator | | created | 2025-07-12T14:23:25Z | 2025-07-12 14:26:41.275317 | orchestrator | | description | None | 2025-07-12 14:26:41.275330 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-07-12 14:26:41.275343 | orchestrator | | hostId | ec3c9111184623ce58f45f52ebeb6011d68d25bee5365f65521515cf | 2025-07-12 14:26:41.275364 | orchestrator | | host_status | None | 2025-07-12 14:26:41.275377 | orchestrator | | id | 
b5d3b888-7b2d-4ff2-96b2-10c60cc2c813 | 2025-07-12 14:26:41.275390 | orchestrator | | image | Cirros 0.6.2 (9ac8d80c-ff3f-4035-bebc-2d18f3d2ea89) | 2025-07-12 14:26:41.275402 | orchestrator | | key_name | test | 2025-07-12 14:26:41.275415 | orchestrator | | locked | False | 2025-07-12 14:26:41.275428 | orchestrator | | locked_reason | None | 2025-07-12 14:26:41.275441 | orchestrator | | name | test-2 | 2025-07-12 14:26:41.275460 | orchestrator | | pinned_availability_zone | None | 2025-07-12 14:26:41.275473 | orchestrator | | progress | 0 | 2025-07-12 14:26:41.275486 | orchestrator | | project_id | 0e8a91e32bc94fdf85c013024f202c69 | 2025-07-12 14:26:41.275499 | orchestrator | | properties | hostname='test-2' | 2025-07-12 14:26:41.275519 | orchestrator | | security_groups | name='icmp' | 2025-07-12 14:26:41.275532 | orchestrator | | | name='ssh' | 2025-07-12 14:26:41.275545 | orchestrator | | server_groups | None | 2025-07-12 14:26:41.275564 | orchestrator | | status | ACTIVE | 2025-07-12 14:26:41.275576 | orchestrator | | tags | test | 2025-07-12 14:26:41.275587 | orchestrator | | trusted_image_certificates | None | 2025-07-12 14:26:41.275599 | orchestrator | | updated | 2025-07-12T14:25:17Z | 2025-07-12 14:26:41.275615 | orchestrator | | user_id | e22e7bf79872417aaafe4ca3c674a0fa | 2025-07-12 14:26:41.275627 | orchestrator | | volumes_attached | | 2025-07-12 14:26:41.278787 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-12 14:26:41.568711 | orchestrator | + openstack --os-cloud test server show test-3 2025-07-12 14:26:44.576498 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-12 14:26:44.576610 | orchestrator | | Field | Value | 2025-07-12 14:26:44.576627 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-12 14:26:44.576640 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-07-12 14:26:44.576651 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-07-12 14:26:44.576663 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-07-12 14:26:44.576675 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2025-07-12 14:26:44.576686 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-07-12 14:26:44.576698 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-07-12 14:26:44.576710 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-07-12 14:26:44.576721 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-07-12 14:26:44.576787 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-07-12 14:26:44.576803 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-07-12 14:26:44.576814 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-07-12 14:26:44.576826 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-07-12 14:26:44.576837 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-07-12 14:26:44.576849 | orchestrator | | OS-EXT-STS:task_state | None | 2025-07-12 14:26:44.576860 | 
orchestrator | | OS-EXT-STS:vm_state | active | 2025-07-12 14:26:44.576872 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-12T14:24:19.000000 | 2025-07-12 14:26:44.576883 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-07-12 14:26:44.576895 | orchestrator | | accessIPv4 | | 2025-07-12 14:26:44.576906 | orchestrator | | accessIPv6 | | 2025-07-12 14:26:44.576926 | orchestrator | | addresses | auto_allocated_network=10.42.0.15, 192.168.112.120 | 2025-07-12 14:26:44.577012 | orchestrator | | config_drive | | 2025-07-12 14:26:44.577028 | orchestrator | | created | 2025-07-12T14:24:03Z | 2025-07-12 14:26:44.577041 | orchestrator | | description | None | 2025-07-12 14:26:44.577054 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-07-12 14:26:44.577067 | orchestrator | | hostId | f35030450f2e5568c419d020931544adba55361325c99d6c90f97284 | 2025-07-12 14:26:44.577079 | orchestrator | | host_status | None | 2025-07-12 14:26:44.577091 | orchestrator | | id | 53574549-94db-49ab-807e-0a3bf15d6575 | 2025-07-12 14:26:44.577104 | orchestrator | | image | Cirros 0.6.2 (9ac8d80c-ff3f-4035-bebc-2d18f3d2ea89) | 2025-07-12 14:26:44.577116 | orchestrator | | key_name | test | 2025-07-12 14:26:44.577129 | orchestrator | | locked | False | 2025-07-12 14:26:44.577150 | orchestrator | | locked_reason | None | 2025-07-12 14:26:44.577162 | orchestrator | | name | test-3 | 2025-07-12 14:26:44.577186 | orchestrator | | pinned_availability_zone | None | 2025-07-12 14:26:44.577201 | orchestrator | | progress | 0 | 2025-07-12 14:26:44.577214 | orchestrator | | project_id | 0e8a91e32bc94fdf85c013024f202c69 | 2025-07-12 14:26:44.577226 | orchestrator | | properties | hostname='test-3' | 2025-07-12 
14:26:44.577238 | orchestrator | | security_groups | name='icmp' | 2025-07-12 14:26:44.577250 | orchestrator | | | name='ssh' | 2025-07-12 14:26:44.577262 | orchestrator | | server_groups | None | 2025-07-12 14:26:44.577274 | orchestrator | | status | ACTIVE | 2025-07-12 14:26:44.577286 | orchestrator | | tags | test | 2025-07-12 14:26:44.577305 | orchestrator | | trusted_image_certificates | None | 2025-07-12 14:26:44.577317 | orchestrator | | updated | 2025-07-12T14:25:22Z | 2025-07-12 14:26:44.577340 | orchestrator | | user_id | e22e7bf79872417aaafe4ca3c674a0fa | 2025-07-12 14:26:44.577353 | orchestrator | | volumes_attached | | 2025-07-12 14:26:44.580986 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-12 14:26:44.840223 | orchestrator | + openstack --os-cloud test server show test-4 2025-07-12 14:26:48.409029 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-12 14:26:48.409140 | orchestrator | | Field | Value | 2025-07-12 14:26:48.409157 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-12 
14:26:48.409169 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-07-12 14:26:48.409181 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-07-12 14:26:48.409217 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-07-12 14:26:48.409229 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2025-07-12 14:26:48.409240 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-07-12 14:26:48.409251 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-07-12 14:26:48.409263 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-07-12 14:26:48.409274 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-07-12 14:26:48.409304 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-07-12 14:26:48.409316 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-07-12 14:26:48.409327 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-07-12 14:26:48.409338 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-07-12 14:26:48.409349 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-07-12 14:26:48.409369 | orchestrator | | OS-EXT-STS:task_state | None | 2025-07-12 14:26:48.409380 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-07-12 14:26:48.409391 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-12T14:24:52.000000 | 2025-07-12 14:26:48.409402 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-07-12 14:26:48.409430 | orchestrator | | accessIPv4 | | 2025-07-12 14:26:48.409446 | orchestrator | | accessIPv6 | | 2025-07-12 14:26:48.409458 | orchestrator | | addresses | auto_allocated_network=10.42.0.52, 192.168.112.147 | 2025-07-12 14:26:48.409476 | orchestrator | | config_drive | | 2025-07-12 14:26:48.409488 | orchestrator | | created | 2025-07-12T14:24:36Z | 2025-07-12 14:26:48.409499 | orchestrator | | description | None | 2025-07-12 14:26:48.409510 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', 
extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-07-12 14:26:48.409528 | orchestrator | | hostId | ec3c9111184623ce58f45f52ebeb6011d68d25bee5365f65521515cf | 2025-07-12 14:26:48.409541 | orchestrator | | host_status | None | 2025-07-12 14:26:48.409554 | orchestrator | | id | 4883bf9d-d142-4a52-bd7e-4ee2ad886b63 | 2025-07-12 14:26:48.409566 | orchestrator | | image | Cirros 0.6.2 (9ac8d80c-ff3f-4035-bebc-2d18f3d2ea89) | 2025-07-12 14:26:48.409578 | orchestrator | | key_name | test | 2025-07-12 14:26:48.409596 | orchestrator | | locked | False | 2025-07-12 14:26:48.409608 | orchestrator | | locked_reason | None | 2025-07-12 14:26:48.409622 | orchestrator | | name | test-4 | 2025-07-12 14:26:48.409640 | orchestrator | | pinned_availability_zone | None | 2025-07-12 14:26:48.409654 | orchestrator | | progress | 0 | 2025-07-12 14:26:48.409673 | orchestrator | | project_id | 0e8a91e32bc94fdf85c013024f202c69 | 2025-07-12 14:26:48.409686 | orchestrator | | properties | hostname='test-4' | 2025-07-12 14:26:48.409698 | orchestrator | | security_groups | name='icmp' | 2025-07-12 14:26:48.409710 | orchestrator | | | name='ssh' | 2025-07-12 14:26:48.409723 | orchestrator | | server_groups | None | 2025-07-12 14:26:48.409735 | orchestrator | | status | ACTIVE | 2025-07-12 14:26:48.409748 | orchestrator | | tags | test | 2025-07-12 14:26:48.409765 | orchestrator | | trusted_image_certificates | None | 2025-07-12 14:26:48.409778 | orchestrator | | updated | 2025-07-12T14:25:26Z | 2025-07-12 14:26:48.409796 | orchestrator | | user_id | e22e7bf79872417aaafe4ca3c674a0fa | 2025-07-12 14:26:48.409809 | orchestrator | | volumes_attached | | 2025-07-12 14:26:48.414582 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-12 14:26:48.694942 | orchestrator | + server_ping 2025-07-12 14:26:48.696056 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-07-12 14:26:48.696117 | orchestrator | ++ tr -d '\r' 2025-07-12 14:26:51.550304 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-12 14:26:51.550408 | orchestrator | + ping -c3 192.168.112.195 2025-07-12 14:26:51.566293 | orchestrator | PING 192.168.112.195 (192.168.112.195) 56(84) bytes of data. 2025-07-12 14:26:51.566369 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=1 ttl=63 time=10.1 ms 2025-07-12 14:26:52.560369 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=2 ttl=63 time=2.57 ms 2025-07-12 14:26:53.560778 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=3 ttl=63 time=2.24 ms 2025-07-12 14:26:53.560875 | orchestrator | 2025-07-12 14:26:53.560892 | orchestrator | --- 192.168.112.195 ping statistics --- 2025-07-12 14:26:53.560906 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-07-12 14:26:53.560917 | orchestrator | rtt min/avg/max/mdev = 2.244/4.985/10.146/3.651 ms 2025-07-12 14:26:53.561840 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-12 14:26:53.561865 | orchestrator | + ping -c3 192.168.112.153 2025-07-12 14:26:53.577199 | orchestrator | PING 192.168.112.153 (192.168.112.153) 56(84) bytes of data. 
64 bytes from 192.168.112.153: icmp_seq=1 ttl=63 time=11.4 ms
64 bytes from 192.168.112.153: icmp_seq=2 ttl=63 time=3.17 ms
64 bytes from 192.168.112.153: icmp_seq=3 ttl=63 time=2.26 ms
--- 192.168.112.153 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 2.256/5.618/11.430/4.126 ms
2025-07-12 14:26:55.571815 | orchestrator | + ping -c3 192.168.112.147
PING 192.168.112.147 (192.168.112.147) 56(84) bytes of data.
64 bytes from 192.168.112.147: icmp_seq=1 ttl=63 time=8.51 ms
64 bytes from 192.168.112.147: icmp_seq=2 ttl=63 time=2.83 ms
64 bytes from 192.168.112.147: icmp_seq=3 ttl=63 time=1.89 ms
--- 192.168.112.147 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 1.889/4.410/8.513/2.926 ms
2025-07-12 14:26:57.583607 | orchestrator | + ping -c3 192.168.112.144
PING 192.168.112.144 (192.168.112.144) 56(84) bytes of data.
64 bytes from 192.168.112.144: icmp_seq=1 ttl=63 time=8.26 ms
64 bytes from 192.168.112.144: icmp_seq=2 ttl=63 time=2.27 ms
64 bytes from 192.168.112.144: icmp_seq=3 ttl=63 time=1.85 ms
--- 192.168.112.144 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.850/4.128/8.263/2.928 ms
2025-07-12 14:26:59.592570 | orchestrator | + ping -c3 192.168.112.120
PING 192.168.112.120 (192.168.112.120) 56(84) bytes of data.
64 bytes from 192.168.112.120: icmp_seq=1 ttl=63 time=8.02 ms
64 bytes from 192.168.112.120: icmp_seq=2 ttl=63 time=2.11 ms
64 bytes from 192.168.112.120: icmp_seq=3 ttl=63 time=2.18 ms
--- 192.168.112.120 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 2.110/4.103/8.018/2.768 ms
2025-07-12 14:27:01.601697 | orchestrator | + [[ 9.2.0 == \l\a\t\e\s\t ]]
2025-07-12 14:27:01.891663 | orchestrator | ok: Runtime: 0:10:53.421425
2025-07-12 14:27:01.939786 | TASK [Run tempest]
2025-07-12 14:27:02.473000 | orchestrator | skipping: Conditional result was False
2025-07-12 14:27:02.491701 | TASK [Check prometheus alert status]
2025-07-12 14:27:03.045590 | orchestrator | skipping: Conditional result was False
2025-07-12 14:27:03.049056 | PLAY RECAP
2025-07-12 14:27:03.049205 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0
2025-07-12 14:27:03.272219 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-07-12 14:27:03.275032 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-07-12 14:27:04.043689 | PLAY [Post output play]
2025-07-12 14:27:04.060034 | LOOP [stage-output : Register sources]
2025-07-12 14:27:04.129850 | TASK [stage-output : Check sudo]
2025-07-12 14:27:04.979080 | orchestrator | sudo: a password is required
2025-07-12 14:27:05.167629 | orchestrator | ok: Runtime: 0:00:00.017020
2025-07-12 14:27:05.182196 | LOOP [stage-output : Set source and destination for files and folders]
2025-07-12 14:27:05.221214 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-07-12 14:27:05.301287 | orchestrator | ok
2025-07-12 14:27:05.310130 | LOOP [stage-output : Ensure target folders exist]
2025-07-12 14:27:05.775968 | orchestrator | ok: "docs"
2025-07-12 14:27:06.004637 | orchestrator | ok: "artifacts"
2025-07-12 14:27:06.252892 | orchestrator | ok: "logs"
2025-07-12 14:27:06.273997 | LOOP [stage-output : Copy files and folders to staging folder]
2025-07-12 14:27:06.313763 | TASK [stage-output : Make all log files readable]
2025-07-12 14:27:06.593646 | orchestrator | ok
2025-07-12 14:27:06.602931 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-07-12 14:27:06.637831 | orchestrator | skipping: Conditional result was False
2025-07-12 14:27:06.654674 | TASK [stage-output : Discover log files for compression]
2025-07-12 14:27:06.679670 | orchestrator | skipping: Conditional result was False
2025-07-12 14:27:06.694903 | LOOP [stage-output : Archive everything from logs]
2025-07-12 14:27:06.747060 | PLAY [Post cleanup play]
2025-07-12 14:27:06.756227 | TASK [Set cloud fact (Zuul deployment)]
2025-07-12 14:27:06.813256 | orchestrator | ok
2025-07-12 14:27:06.825929 | TASK [Set cloud fact (local deployment)]
2025-07-12 14:27:06.861512 | orchestrator | skipping: Conditional result was False
2025-07-12 14:27:06.878900 | TASK [Clean the cloud environment]
2025-07-12 14:27:07.888083 | orchestrator | 2025-07-12 14:27:07 - clean up servers
2025-07-12 14:27:08.686786 | orchestrator | 2025-07-12 14:27:08 - testbed-manager
2025-07-12 14:27:08.775295 | orchestrator | 2025-07-12 14:27:08 - testbed-node-4
2025-07-12 14:27:08.866265 | orchestrator | 2025-07-12 14:27:08 - testbed-node-2
2025-07-12 14:27:08.965923 | orchestrator | 2025-07-12 14:27:08 - testbed-node-3
2025-07-12 14:27:09.070171 | orchestrator | 2025-07-12 14:27:09 - testbed-node-5
2025-07-12 14:27:09.387277 | orchestrator | 2025-07-12 14:27:09 - testbed-node-1
2025-07-12 14:27:09.482562 | orchestrator | 2025-07-12 14:27:09 - testbed-node-0
2025-07-12 14:27:09.576693 | orchestrator | 2025-07-12 14:27:09 - clean up keypairs
2025-07-12 14:27:09.597082 | orchestrator | 2025-07-12 14:27:09 - testbed
2025-07-12 14:27:09.625457 | orchestrator | 2025-07-12 14:27:09 - wait for servers to be gone
2025-07-12 14:27:20.769522 | orchestrator | 2025-07-12 14:27:20 - clean up ports
2025-07-12 14:27:20.952174 | orchestrator | 2025-07-12 14:27:20 - 039cdd73-1cec-4443-8b58-e83baae916bf
2025-07-12 14:27:21.188269 | orchestrator | 2025-07-12 14:27:21 - 29daab07-71bf-40ad-8aef-1ed1e51fbeaf
2025-07-12 14:27:21.435093 | orchestrator | 2025-07-12 14:27:21 - 38e2dca7-4040-42a2-98f0-610cc4b0a2f0
2025-07-12 14:27:21.671606 | orchestrator | 2025-07-12 14:27:21 - 50ef6906-9471-45e8-8695-e372eeeba3d0
2025-07-12 14:27:22.047686 | orchestrator | 2025-07-12 14:27:22 - 75b40426-0ad1-45bf-a81b-2ca7c5e67550
2025-07-12 14:27:22.259029 | orchestrator | 2025-07-12 14:27:22 - ec920d31-8206-4107-bb7a-137404b1ccc3
2025-07-12 14:27:22.477448 | orchestrator | 2025-07-12 14:27:22 - ecc06b27-4f0d-45ff-beae-3999d200179f
2025-07-12 14:27:22.672749 | orchestrator | 2025-07-12 14:27:22 - clean up volumes
2025-07-12 14:27:22.795428 | orchestrator | 2025-07-12 14:27:22 - testbed-volume-5-node-base
2025-07-12 14:27:22.834053 | orchestrator | 2025-07-12 14:27:22 - testbed-volume-1-node-base
2025-07-12 14:27:22.869310 | orchestrator | 2025-07-12 14:27:22 - testbed-volume-0-node-base
2025-07-12 14:27:22.905848 | orchestrator | 2025-07-12 14:27:22 - testbed-volume-4-node-base
2025-07-12 14:27:22.946263 | orchestrator | 2025-07-12 14:27:22 - testbed-volume-3-node-base
2025-07-12 14:27:22.987026 | orchestrator | 2025-07-12 14:27:22 - testbed-volume-2-node-base
2025-07-12 14:27:23.025448 | orchestrator | 2025-07-12 14:27:23 - testbed-volume-manager-base
2025-07-12 14:27:23.067300 | orchestrator | 2025-07-12 14:27:23 - testbed-volume-6-node-3
2025-07-12 14:27:23.111878 | orchestrator | 2025-07-12 14:27:23 - testbed-volume-0-node-3
2025-07-12 14:27:23.157781 | orchestrator | 2025-07-12 14:27:23 - testbed-volume-3-node-3
2025-07-12 14:27:23.198639 | orchestrator | 2025-07-12 14:27:23 - testbed-volume-7-node-4
2025-07-12 14:27:23.241322 | orchestrator | 2025-07-12 14:27:23 - testbed-volume-2-node-5
2025-07-12 14:27:23.290904 | orchestrator | 2025-07-12 14:27:23 - testbed-volume-4-node-4
2025-07-12 14:27:23.333166 | orchestrator | 2025-07-12 14:27:23 - testbed-volume-8-node-5
2025-07-12 14:27:23.377151 | orchestrator | 2025-07-12 14:27:23 - testbed-volume-5-node-5
2025-07-12 14:27:23.416270 | orchestrator | 2025-07-12 14:27:23 - testbed-volume-1-node-4
2025-07-12 14:27:23.457787 | orchestrator | 2025-07-12 14:27:23 - disconnect routers
2025-07-12 14:27:23.573456 | orchestrator | 2025-07-12 14:27:23 - testbed
2025-07-12 14:27:24.910343 | orchestrator | 2025-07-12 14:27:24 - clean up subnets
2025-07-12 14:27:24.949069 | orchestrator | 2025-07-12 14:27:24 - subnet-testbed-management
2025-07-12 14:27:25.113702 | orchestrator | 2025-07-12 14:27:25 - clean up networks
2025-07-12 14:27:25.290414 | orchestrator | 2025-07-12 14:27:25 - net-testbed-management
2025-07-12 14:27:25.593875 | orchestrator | 2025-07-12 14:27:25 - clean up security groups
2025-07-12 14:27:25.634734 | orchestrator | 2025-07-12 14:27:25 - testbed-management
2025-07-12 14:27:25.751649 | orchestrator | 2025-07-12 14:27:25 - testbed-node
2025-07-12 14:27:25.859901 | orchestrator | 2025-07-12 14:27:25 - clean up floating ips
2025-07-12 14:27:25.892910 | orchestrator | 2025-07-12 14:27:25 - 81.163.193.180
2025-07-12 14:27:26.274285 | orchestrator | 2025-07-12 14:27:26 - clean up routers
2025-07-12 14:27:26.382256 | orchestrator | 2025-07-12 14:27:26 - testbed
2025-07-12 14:27:27.951388 | orchestrator | ok: Runtime: 0:00:20.450219
2025-07-12 14:27:27.955741 |
2025-07-12 14:27:27.955915 | PLAY RECAP
2025-07-12 14:27:27.956038 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-07-12 14:27:27.956098 |
2025-07-12 14:27:28.090059 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-07-12 14:27:28.091100 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-07-12 14:27:28.835627 |
2025-07-12 14:27:28.835792 | PLAY [Cleanup play]
2025-07-12 14:27:28.852047 |
2025-07-12 14:27:28.852186 | TASK [Set cloud fact (Zuul deployment)]
2025-07-12 14:27:28.905546 | orchestrator | ok
2025-07-12 14:27:28.913147 |
2025-07-12 14:27:28.913289 | TASK [Set cloud fact (local deployment)]
2025-07-12 14:27:28.937598 | orchestrator | skipping: Conditional result was False
2025-07-12 14:27:28.948842 |
2025-07-12 14:27:28.948977 | TASK [Clean the cloud environment]
2025-07-12 14:27:30.071270 | orchestrator | 2025-07-12 14:27:30 - clean up servers
2025-07-12 14:27:30.536889 | orchestrator | 2025-07-12 14:27:30 - clean up keypairs
2025-07-12 14:27:30.553852 | orchestrator | 2025-07-12 14:27:30 - wait for servers to be gone
2025-07-12 14:27:30.592851 | orchestrator | 2025-07-12 14:27:30 - clean up ports
2025-07-12 14:27:30.665036 | orchestrator | 2025-07-12 14:27:30 - clean up volumes
2025-07-12 14:27:30.724210 | orchestrator | 2025-07-12 14:27:30 - disconnect routers
2025-07-12 14:27:30.756047 | orchestrator | 2025-07-12 14:27:30 - clean up subnets
2025-07-12 14:27:30.772715 | orchestrator | 2025-07-12 14:27:30 - clean up networks
2025-07-12 14:27:30.926903 | orchestrator | 2025-07-12 14:27:30 - clean up security groups
2025-07-12 14:27:30.963414 | orchestrator | 2025-07-12 14:27:30 - clean up floating ips
2025-07-12 14:27:30.986072 | orchestrator | 2025-07-12 14:27:30 - clean up routers
2025-07-12 14:27:31.486577 | orchestrator | ok: Runtime: 0:00:01.281142
2025-07-12 14:27:31.490066 |
2025-07-12 14:27:31.490207 | PLAY RECAP
2025-07-12 14:27:31.490309 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-07-12 14:27:31.490417 |
2025-07-12 14:27:31.623291 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-07-12 14:27:31.625785 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-07-12 14:27:32.381307 |
2025-07-12 14:27:32.381519 | PLAY [Base post-fetch]
2025-07-12 14:27:32.397467 |
2025-07-12 14:27:32.397625 | TASK [fetch-output : Set log path for multiple nodes]
2025-07-12 14:27:32.453110 | orchestrator | skipping: Conditional result was False
2025-07-12 14:27:32.468577 |
2025-07-12 14:27:32.468833 | TASK [fetch-output : Set log path for single node]
2025-07-12 14:27:32.520075 | orchestrator | ok
2025-07-12 14:27:32.529524 |
2025-07-12 14:27:32.529688 | LOOP [fetch-output : Ensure local output dirs]
2025-07-12 14:27:32.994964 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/40f5487bf9f54cd38fd17208779020e4/work/logs"
2025-07-12 14:27:33.266939 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/40f5487bf9f54cd38fd17208779020e4/work/artifacts"
2025-07-12 14:27:33.544621 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/40f5487bf9f54cd38fd17208779020e4/work/docs"
2025-07-12 14:27:33.569916 |
2025-07-12 14:27:33.570128 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-07-12 14:27:34.502221 | orchestrator | changed: .d..t...... ./
2025-07-12 14:27:34.502589 | orchestrator | changed: All items complete
2025-07-12 14:27:34.502655 |
2025-07-12 14:27:35.251059 | orchestrator | changed: .d..t...... ./
2025-07-12 14:27:35.964652 | orchestrator | changed: .d..t...... ./
2025-07-12 14:27:35.994809 |
2025-07-12 14:27:35.995731 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-07-12 14:27:36.527813 | orchestrator -> localhost | ok: Item: artifacts Runtime: 0:00:00.021070
2025-07-12 14:27:36.805026 | orchestrator -> localhost | ok: Item: docs Runtime: 0:00:00.010314
2025-07-12 14:27:36.829668 |
2025-07-12 14:27:36.829859 | PLAY RECAP
2025-07-12 14:27:36.829925 | orchestrator | ok: 4 changed: 3 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-07-12 14:27:36.829958 |
2025-07-12 14:27:36.960008 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-07-12 14:27:36.962034 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-07-12 14:27:37.710611 |
2025-07-12 14:27:37.710777 | PLAY [Base post]
2025-07-12 14:27:37.725386 |
2025-07-12 14:27:37.725528 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-07-12 14:27:38.801714 | orchestrator | changed
2025-07-12 14:27:38.812457 |
2025-07-12 14:27:38.812590 | PLAY RECAP
2025-07-12 14:27:38.812670 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-07-12 14:27:38.812747 |
2025-07-12 14:27:38.940093 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-07-12 14:27:38.941142 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-07-12 14:27:39.731326 |
2025-07-12 14:27:39.731527 | PLAY [Base post-logs]
2025-07-12 14:27:39.742462 |
2025-07-12 14:27:39.742612 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-07-12 14:27:40.228689 | localhost | changed
2025-07-12 14:27:40.245190 |
2025-07-12 14:27:40.245403 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-07-12 14:27:40.272847 | localhost | ok
2025-07-12 14:27:40.277840 |
2025-07-12 14:27:40.277981 | TASK [Set zuul-log-path fact]
2025-07-12 14:27:40.295228 | localhost | ok
2025-07-12 14:27:40.305845 |
2025-07-12 14:27:40.305971 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-07-12 14:27:40.342493 | localhost | ok
2025-07-12 14:27:40.348162 |
2025-07-12 14:27:40.348327 | TASK [upload-logs : Create log directories]
2025-07-12 14:27:40.865933 | localhost | changed
2025-07-12 14:27:40.869996 |
2025-07-12 14:27:40.870137 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-07-12 14:27:41.384293 | localhost -> localhost | ok: Runtime: 0:00:00.007963
2025-07-12 14:27:41.393777 |
2025-07-12 14:27:41.393973 | TASK [upload-logs : Upload logs to log server]
2025-07-12 14:27:41.963828 | localhost | Output suppressed because no_log was given
2025-07-12 14:27:41.968006 |
2025-07-12 14:27:41.968216 | LOOP [upload-logs : Compress console log and json output]
2025-07-12 14:27:42.034232 | localhost | skipping: Conditional result was False
2025-07-12 14:27:42.041386 | localhost | skipping: Conditional result was False
2025-07-12 14:27:42.053295 |
2025-07-12 14:27:42.053561 | LOOP [upload-logs : Upload compressed console log and json output]
2025-07-12 14:27:42.112514 | localhost | skipping: Conditional result was False
2025-07-12 14:27:42.113056 |
2025-07-12 14:27:42.116975 | localhost | skipping: Conditional result was False
2025-07-12 14:27:42.128450 |
2025-07-12 14:27:42.128638 | LOOP [upload-logs : Upload console log and json output]
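The "Clean the cloud environment" task above runs the same teardown twice: a full pass in post.yml (deleting servers, keypairs, ports, volumes, then the network fabric, with routers last) and an idempotent second pass in cleanup.yml that finds nothing left to delete. The ordering matters because OpenStack refuses to delete resources that still have dependents (e.g. a router with attached subnets). The sketch below is not the osism/testbed implementation; it only encodes the phase ordering observed in this log, with hypothetical cleaner callables standing in for the real API calls:

```python
# Teardown phases exactly as they appear in the log: dependent resources
# are removed before the resources they depend on, routers go last.
CLEANUP_PHASES = [
    "clean up servers",
    "clean up keypairs",
    "wait for servers to be gone",
    "clean up ports",
    "clean up volumes",
    "disconnect routers",
    "clean up subnets",
    "clean up networks",
    "clean up security groups",
    "clean up floating ips",
    "clean up routers",
]


def run_cleanup(cleaners):
    """Run every phase in the fixed order.

    `cleaners` maps a phase name to a list of callables (one per resource
    still present).  A phase with no entries is simply announced and moved
    past, which is why the second cleanup run in the log lists all phases
    but deletes nothing and finishes in about a second.
    """
    executed = []
    for phase in CLEANUP_PHASES:
        executed.append(phase)       # the log prints the phase header first
        for delete in cleaners.get(phase, []):
            delete()                 # then one line per deleted resource
    return executed
```

The first run would pass a populated `cleaners` mapping (seven servers, seven ports, and so on); the second run passes an empty one and still reports every phase, matching the two runtimes above (0:00:20 vs. 0:00:01).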